Mar 3 12:46:10.124325 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 3 12:46:10.124370 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Mar 3 11:03:33 -00 2026
Mar 3 12:46:10.124395 kernel: KASLR disabled due to lack of seed
Mar 3 12:46:10.124411 kernel: efi: EFI v2.7 by EDK II
Mar 3 12:46:10.124427 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78551598
Mar 3 12:46:10.124442 kernel: secureboot: Secure boot disabled
Mar 3 12:46:10.124459 kernel: ACPI: Early table checksum verification disabled
Mar 3 12:46:10.124474 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 3 12:46:10.124489 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 3 12:46:10.124504 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 3 12:46:10.124519 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 3 12:46:10.124538 kernel: ACPI: FACS 0x0000000078630000 000040
Mar 3 12:46:10.124553 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 3 12:46:10.124568 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 3 12:46:10.124586 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 3 12:46:10.124602 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 3 12:46:10.124623 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 3 12:46:10.124639 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 3 12:46:10.124655 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 3 12:46:10.124670 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 3 12:46:10.124687 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 3 12:46:10.124702 kernel: printk: legacy bootconsole [uart0] enabled
Mar 3 12:46:10.124718 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 3 12:46:10.124735 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 3 12:46:10.124751 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Mar 3 12:46:10.124767 kernel: Zone ranges:
Mar 3 12:46:10.124783 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 3 12:46:10.124804 kernel: DMA32 empty
Mar 3 12:46:10.124820 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 3 12:46:10.124835 kernel: Device empty
Mar 3 12:46:10.124851 kernel: Movable zone start for each node
Mar 3 12:46:10.124867 kernel: Early memory node ranges
Mar 3 12:46:10.124883 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 3 12:46:10.124898 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 3 12:46:10.124914 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 3 12:46:10.124930 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 3 12:46:10.124946 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 3 12:46:10.124962 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 3 12:46:10.124978 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 3 12:46:10.124998 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 3 12:46:10.125021 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 3 12:46:10.125038 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 3 12:46:10.125055 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Mar 3 12:46:10.125071 kernel: psci: probing for conduit method from ACPI.
Mar 3 12:46:10.125092 kernel: psci: PSCIv1.0 detected in firmware.
Mar 3 12:46:10.125109 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 3 12:46:10.125125 kernel: psci: Trusted OS migration not required
Mar 3 12:46:10.125142 kernel: psci: SMC Calling Convention v1.1
Mar 3 12:46:10.125159 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 3 12:46:10.125176 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 3 12:46:10.125192 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 3 12:46:10.125236 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 3 12:46:10.125262 kernel: Detected PIPT I-cache on CPU0
Mar 3 12:46:10.125280 kernel: CPU features: detected: GIC system register CPU interface
Mar 3 12:46:10.125298 kernel: CPU features: detected: Spectre-v2
Mar 3 12:46:10.125321 kernel: CPU features: detected: Spectre-v3a
Mar 3 12:46:10.125338 kernel: CPU features: detected: Spectre-BHB
Mar 3 12:46:10.125354 kernel: CPU features: detected: ARM erratum 1742098
Mar 3 12:46:10.125371 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 3 12:46:10.125388 kernel: alternatives: applying boot alternatives
Mar 3 12:46:10.125407 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9550c2083f3062ad7c57f28a015a3afab95dfddb073076612b771af8d5df9e06
Mar 3 12:46:10.125424 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 3 12:46:10.125441 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 3 12:46:10.125458 kernel: Fallback order for Node 0: 0
Mar 3 12:46:10.125474 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Mar 3 12:46:10.125491 kernel: Policy zone: Normal
Mar 3 12:46:10.125512 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 3 12:46:10.125528 kernel: software IO TLB: area num 2.
Mar 3 12:46:10.125546 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Mar 3 12:46:10.125562 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 3 12:46:10.125579 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 3 12:46:10.125597 kernel: rcu: RCU event tracing is enabled.
Mar 3 12:46:10.125614 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 3 12:46:10.125631 kernel: Trampoline variant of Tasks RCU enabled.
Mar 3 12:46:10.125649 kernel: Tracing variant of Tasks RCU enabled.
Mar 3 12:46:10.125666 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 3 12:46:10.125683 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 3 12:46:10.125703 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 3 12:46:10.125720 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 3 12:46:10.125737 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 3 12:46:10.125754 kernel: GICv3: 96 SPIs implemented
Mar 3 12:46:10.125770 kernel: GICv3: 0 Extended SPIs implemented
Mar 3 12:46:10.125787 kernel: Root IRQ handler: gic_handle_irq
Mar 3 12:46:10.125803 kernel: GICv3: GICv3 features: 16 PPIs
Mar 3 12:46:10.125820 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Mar 3 12:46:10.125837 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 3 12:46:10.125853 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 3 12:46:10.125870 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Mar 3 12:46:10.125888 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Mar 3 12:46:10.125909 kernel: GICv3: using LPI property table @0x0000000400110000
Mar 3 12:46:10.125926 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 3 12:46:10.125942 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Mar 3 12:46:10.125959 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 3 12:46:10.125975 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 3 12:46:10.125992 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 3 12:46:10.126009 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 3 12:46:10.126026 kernel: Console: colour dummy device 80x25
Mar 3 12:46:10.126044 kernel: printk: legacy console [tty1] enabled
Mar 3 12:46:10.126061 kernel: ACPI: Core revision 20240827
Mar 3 12:46:10.126079 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 3 12:46:10.126100 kernel: pid_max: default: 32768 minimum: 301
Mar 3 12:46:10.126117 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 3 12:46:10.126134 kernel: landlock: Up and running.
Mar 3 12:46:10.126151 kernel: SELinux: Initializing.
Mar 3 12:46:10.126168 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 12:46:10.126185 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 12:46:10.126203 kernel: rcu: Hierarchical SRCU implementation.
Mar 3 12:46:10.126757 kernel: rcu: Max phase no-delay instances is 400.
Mar 3 12:46:10.126777 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 3 12:46:10.126801 kernel: Remapping and enabling EFI services.
Mar 3 12:46:10.126818 kernel: smp: Bringing up secondary CPUs ...
Mar 3 12:46:10.126835 kernel: Detected PIPT I-cache on CPU1
Mar 3 12:46:10.126852 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 3 12:46:10.126869 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Mar 3 12:46:10.126886 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 3 12:46:10.126904 kernel: smp: Brought up 1 node, 2 CPUs
Mar 3 12:46:10.126942 kernel: SMP: Total of 2 processors activated.
Mar 3 12:46:10.126960 kernel: CPU: All CPU(s) started at EL1
Mar 3 12:46:10.126992 kernel: CPU features: detected: 32-bit EL0 Support
Mar 3 12:46:10.127011 kernel: CPU features: detected: 32-bit EL1 Support
Mar 3 12:46:10.127032 kernel: CPU features: detected: CRC32 instructions
Mar 3 12:46:10.127051 kernel: alternatives: applying system-wide alternatives
Mar 3 12:46:10.127070 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Mar 3 12:46:10.127088 kernel: devtmpfs: initialized
Mar 3 12:46:10.127106 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 3 12:46:10.127129 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 3 12:46:10.127147 kernel: 16880 pages in range for non-PLT usage
Mar 3 12:46:10.127165 kernel: 508400 pages in range for PLT usage
Mar 3 12:46:10.127183 kernel: pinctrl core: initialized pinctrl subsystem
Mar 3 12:46:10.127201 kernel: SMBIOS 3.0.0 present.
Mar 3 12:46:10.127265 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 3 12:46:10.127284 kernel: DMI: Memory slots populated: 0/0
Mar 3 12:46:10.127303 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 3 12:46:10.127322 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 3 12:46:10.127346 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 3 12:46:10.127365 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 3 12:46:10.127384 kernel: audit: initializing netlink subsys (disabled)
Mar 3 12:46:10.127402 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1
Mar 3 12:46:10.127420 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 3 12:46:10.127438 kernel: cpuidle: using governor menu
Mar 3 12:46:10.127456 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 3 12:46:10.127474 kernel: ASID allocator initialised with 65536 entries
Mar 3 12:46:10.127492 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 3 12:46:10.127514 kernel: Serial: AMBA PL011 UART driver
Mar 3 12:46:10.127532 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 3 12:46:10.127551 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 3 12:46:10.127569 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 3 12:46:10.127606 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 3 12:46:10.127630 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 3 12:46:10.127648 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 3 12:46:10.127667 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 3 12:46:10.127685 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 3 12:46:10.127708 kernel: ACPI: Added _OSI(Module Device)
Mar 3 12:46:10.127726 kernel: ACPI: Added _OSI(Processor Device)
Mar 3 12:46:10.127743 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 3 12:46:10.127762 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 3 12:46:10.127779 kernel: ACPI: Interpreter enabled
Mar 3 12:46:10.127798 kernel: ACPI: Using GIC for interrupt routing
Mar 3 12:46:10.127816 kernel: ACPI: MCFG table detected, 1 entries
Mar 3 12:46:10.127834 kernel: ACPI: CPU0 has been hot-added
Mar 3 12:46:10.127851 kernel: ACPI: CPU1 has been hot-added
Mar 3 12:46:10.127873 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 3 12:46:10.128168 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 3 12:46:10.129459 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 3 12:46:10.129663 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 3 12:46:10.129874 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 3 12:46:10.130061 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 3 12:46:10.130087 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 3 12:46:10.130115 kernel: acpiphp: Slot [1] registered
Mar 3 12:46:10.130134 kernel: acpiphp: Slot [2] registered
Mar 3 12:46:10.130153 kernel: acpiphp: Slot [3] registered
Mar 3 12:46:10.130171 kernel: acpiphp: Slot [4] registered
Mar 3 12:46:10.130189 kernel: acpiphp: Slot [5] registered
Mar 3 12:46:10.130237 kernel: acpiphp: Slot [6] registered
Mar 3 12:46:10.130261 kernel: acpiphp: Slot [7] registered
Mar 3 12:46:10.130280 kernel: acpiphp: Slot [8] registered
Mar 3 12:46:10.130298 kernel: acpiphp: Slot [9] registered
Mar 3 12:46:10.130316 kernel: acpiphp: Slot [10] registered
Mar 3 12:46:10.130340 kernel: acpiphp: Slot [11] registered
Mar 3 12:46:10.130357 kernel: acpiphp: Slot [12] registered
Mar 3 12:46:10.130375 kernel: acpiphp: Slot [13] registered
Mar 3 12:46:10.130393 kernel: acpiphp: Slot [14] registered
Mar 3 12:46:10.130411 kernel: acpiphp: Slot [15] registered
Mar 3 12:46:10.130429 kernel: acpiphp: Slot [16] registered
Mar 3 12:46:10.130447 kernel: acpiphp: Slot [17] registered
Mar 3 12:46:10.130465 kernel: acpiphp: Slot [18] registered
Mar 3 12:46:10.130483 kernel: acpiphp: Slot [19] registered
Mar 3 12:46:10.130504 kernel: acpiphp: Slot [20] registered
Mar 3 12:46:10.130522 kernel: acpiphp: Slot [21] registered
Mar 3 12:46:10.130540 kernel: acpiphp: Slot [22] registered
Mar 3 12:46:10.130558 kernel: acpiphp: Slot [23] registered
Mar 3 12:46:10.130576 kernel: acpiphp: Slot [24] registered
Mar 3 12:46:10.130593 kernel: acpiphp: Slot [25] registered
Mar 3 12:46:10.130611 kernel: acpiphp: Slot [26] registered
Mar 3 12:46:10.130629 kernel: acpiphp: Slot [27] registered
Mar 3 12:46:10.130647 kernel: acpiphp: Slot [28] registered
Mar 3 12:46:10.130664 kernel: acpiphp: Slot [29] registered
Mar 3 12:46:10.130687 kernel: acpiphp: Slot [30] registered
Mar 3 12:46:10.130704 kernel: acpiphp: Slot [31] registered
Mar 3 12:46:10.130722 kernel: PCI host bridge to bus 0000:00
Mar 3 12:46:10.130938 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 3 12:46:10.131115 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 3 12:46:10.131309 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 3 12:46:10.131479 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 3 12:46:10.131709 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Mar 3 12:46:10.131921 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Mar 3 12:46:10.132112 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Mar 3 12:46:10.132636 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Mar 3 12:46:10.132833 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Mar 3 12:46:10.133021 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 3 12:46:10.134297 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Mar 3 12:46:10.134543 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Mar 3 12:46:10.134733 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Mar 3 12:46:10.134941 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Mar 3 12:46:10.135133 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 3 12:46:10.135337 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 3 12:46:10.135507 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 3 12:46:10.135683 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 3 12:46:10.135707 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 3 12:46:10.135727 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 3 12:46:10.135745 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 3 12:46:10.135763 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 3 12:46:10.135781 kernel: iommu: Default domain type: Translated
Mar 3 12:46:10.135800 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 3 12:46:10.135817 kernel: efivars: Registered efivars operations
Mar 3 12:46:10.135835 kernel: vgaarb: loaded
Mar 3 12:46:10.135858 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 3 12:46:10.135876 kernel: VFS: Disk quotas dquot_6.6.0
Mar 3 12:46:10.135894 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 3 12:46:10.135912 kernel: pnp: PnP ACPI init
Mar 3 12:46:10.136114 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 3 12:46:10.136140 kernel: pnp: PnP ACPI: found 1 devices
Mar 3 12:46:10.136158 kernel: NET: Registered PF_INET protocol family
Mar 3 12:46:10.136176 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 3 12:46:10.136200 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 3 12:46:10.137289 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 3 12:46:10.137313 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 3 12:46:10.137332 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 3 12:46:10.137351 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 3 12:46:10.137369 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 12:46:10.137387 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 12:46:10.137405 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 3 12:46:10.137423 kernel: PCI: CLS 0 bytes, default 64
Mar 3 12:46:10.137448 kernel: kvm [1]: HYP mode not available
Mar 3 12:46:10.137467 kernel: Initialise system trusted keyrings
Mar 3 12:46:10.137485 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 3 12:46:10.137504 kernel: Key type asymmetric registered
Mar 3 12:46:10.137522 kernel: Asymmetric key parser 'x509' registered
Mar 3 12:46:10.137540 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 3 12:46:10.137558 kernel: io scheduler mq-deadline registered
Mar 3 12:46:10.137576 kernel: io scheduler kyber registered
Mar 3 12:46:10.137593 kernel: io scheduler bfq registered
Mar 3 12:46:10.137834 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 3 12:46:10.137861 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 3 12:46:10.137880 kernel: ACPI: button: Power Button [PWRB]
Mar 3 12:46:10.137898 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 3 12:46:10.137916 kernel: ACPI: button: Sleep Button [SLPB]
Mar 3 12:46:10.137934 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 3 12:46:10.137953 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 3 12:46:10.138147 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 3 12:46:10.138177 kernel: printk: legacy console [ttyS0] disabled
Mar 3 12:46:10.138196 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 3 12:46:10.140258 kernel: printk: legacy console [ttyS0] enabled
Mar 3 12:46:10.140306 kernel: printk: legacy bootconsole [uart0] disabled
Mar 3 12:46:10.140326 kernel: thunder_xcv, ver 1.0
Mar 3 12:46:10.140345 kernel: thunder_bgx, ver 1.0
Mar 3 12:46:10.140363 kernel: nicpf, ver 1.0
Mar 3 12:46:10.140382 kernel: nicvf, ver 1.0
Mar 3 12:46:10.140671 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 3 12:46:10.140872 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-03T12:46:09 UTC (1772541969)
Mar 3 12:46:10.140899 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 3 12:46:10.140919 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Mar 3 12:46:10.140938 kernel: watchdog: NMI not fully supported
Mar 3 12:46:10.140957 kernel: NET: Registered PF_INET6 protocol family
Mar 3 12:46:10.140976 kernel: watchdog: Hard watchdog permanently disabled
Mar 3 12:46:10.140994 kernel: Segment Routing with IPv6
Mar 3 12:46:10.141012 kernel: In-situ OAM (IOAM) with IPv6
Mar 3 12:46:10.141031 kernel: NET: Registered PF_PACKET protocol family
Mar 3 12:46:10.141056 kernel: Key type dns_resolver registered
Mar 3 12:46:10.141075 kernel: registered taskstats version 1
Mar 3 12:46:10.141094 kernel: Loading compiled-in X.509 certificates
Mar 3 12:46:10.141113 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 14a741e1e2b172e51b42fe87d143cf4cae2ad92c'
Mar 3 12:46:10.141131 kernel: Demotion targets for Node 0: null
Mar 3 12:46:10.141150 kernel: Key type .fscrypt registered
Mar 3 12:46:10.141168 kernel: Key type fscrypt-provisioning registered
Mar 3 12:46:10.141187 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 3 12:46:10.141205 kernel: ima: Allocated hash algorithm: sha1
Mar 3 12:46:10.141294 kernel: ima: No architecture policies found
Mar 3 12:46:10.141313 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 3 12:46:10.141332 kernel: clk: Disabling unused clocks
Mar 3 12:46:10.141350 kernel: PM: genpd: Disabling unused power domains
Mar 3 12:46:10.141368 kernel: Warning: unable to open an initial console.
Mar 3 12:46:10.141386 kernel: Freeing unused kernel memory: 39552K
Mar 3 12:46:10.141404 kernel: Run /init as init process
Mar 3 12:46:10.141422 kernel: with arguments:
Mar 3 12:46:10.141440 kernel: /init
Mar 3 12:46:10.141462 kernel: with environment:
Mar 3 12:46:10.141480 kernel: HOME=/
Mar 3 12:46:10.141498 kernel: TERM=linux
Mar 3 12:46:10.141518 systemd[1]: Successfully made /usr/ read-only.
Mar 3 12:46:10.141542 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 12:46:10.141563 systemd[1]: Detected virtualization amazon.
Mar 3 12:46:10.141582 systemd[1]: Detected architecture arm64.
Mar 3 12:46:10.141605 systemd[1]: Running in initrd.
Mar 3 12:46:10.141624 systemd[1]: No hostname configured, using default hostname.
Mar 3 12:46:10.141644 systemd[1]: Hostname set to .
Mar 3 12:46:10.141663 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 12:46:10.141682 systemd[1]: Queued start job for default target initrd.target.
Mar 3 12:46:10.141701 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 12:46:10.141726 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 12:46:10.141746 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 3 12:46:10.141770 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 12:46:10.141789 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 3 12:46:10.141810 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 3 12:46:10.141832 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 3 12:46:10.141851 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 3 12:46:10.141871 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 12:46:10.141890 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 12:46:10.141913 systemd[1]: Reached target paths.target - Path Units.
Mar 3 12:46:10.141933 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 12:46:10.141952 systemd[1]: Reached target swap.target - Swaps.
Mar 3 12:46:10.141971 systemd[1]: Reached target timers.target - Timer Units.
Mar 3 12:46:10.141991 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 12:46:10.142010 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 12:46:10.142030 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 3 12:46:10.142049 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 3 12:46:10.142069 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 12:46:10.142093 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 12:46:10.142112 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 12:46:10.142131 systemd[1]: Reached target sockets.target - Socket Units.
Mar 3 12:46:10.142151 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 3 12:46:10.142170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 12:46:10.142189 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 3 12:46:10.142281 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 3 12:46:10.142307 systemd[1]: Starting systemd-fsck-usr.service...
Mar 3 12:46:10.142334 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 12:46:10.142354 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 12:46:10.142374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 12:46:10.142393 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 3 12:46:10.142414 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 12:46:10.142438 systemd[1]: Finished systemd-fsck-usr.service.
Mar 3 12:46:10.142457 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 3 12:46:10.142519 systemd-journald[259]: Collecting audit messages is disabled.
Mar 3 12:46:10.142561 systemd-journald[259]: Journal started
Mar 3 12:46:10.142601 systemd-journald[259]: Runtime Journal (/run/log/journal/ec22180754e0376a1e618b842629865b) is 8M, max 75.3M, 67.3M free.
Mar 3 12:46:10.120443 systemd-modules-load[260]: Inserted module 'overlay'
Mar 3 12:46:10.148501 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 12:46:10.154485 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 3 12:46:10.163699 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 3 12:46:10.163736 kernel: Bridge firewalling registered
Mar 3 12:46:10.168353 systemd-modules-load[260]: Inserted module 'br_netfilter'
Mar 3 12:46:10.171631 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 3 12:46:10.179698 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 3 12:46:10.188978 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 3 12:46:10.195524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:10.198807 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 3 12:46:10.213870 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 12:46:10.221960 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 12:46:10.237588 systemd-tmpfiles[278]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 3 12:46:10.250251 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 12:46:10.265616 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 12:46:10.274298 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 3 12:46:10.295199 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 12:46:10.304461 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 3 12:46:10.358706 dracut-cmdline[302]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9550c2083f3062ad7c57f28a015a3afab95dfddb073076612b771af8d5df9e06
Mar 3 12:46:10.365431 systemd-resolved[293]: Positive Trust Anchors:
Mar 3 12:46:10.365450 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 3 12:46:10.365507 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 3 12:46:10.540251 kernel: SCSI subsystem initialized
Mar 3 12:46:10.548247 kernel: Loading iSCSI transport class v2.0-870.
Mar 3 12:46:10.560271 kernel: iscsi: registered transport (tcp)
Mar 3 12:46:10.582248 kernel: iscsi: registered transport (qla4xxx)
Mar 3 12:46:10.582361 kernel: QLogic iSCSI HBA Driver
Mar 3 12:46:10.615377 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 12:46:10.643115 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 12:46:10.649709 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 12:46:10.681267 kernel: random: crng init done
Mar 3 12:46:10.681625 systemd-resolved[293]: Defaulting to hostname 'linux'.
Mar 3 12:46:10.689561 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 3 12:46:10.699663 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 3 12:46:10.743042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 3 12:46:10.751422 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 3 12:46:10.835257 kernel: raid6: neonx8 gen() 6509 MB/s
Mar 3 12:46:10.852245 kernel: raid6: neonx4 gen() 6584 MB/s
Mar 3 12:46:10.869243 kernel: raid6: neonx2 gen() 5458 MB/s
Mar 3 12:46:10.886244 kernel: raid6: neonx1 gen() 3960 MB/s
Mar 3 12:46:10.903243 kernel: raid6: int64x8 gen() 3665 MB/s
Mar 3 12:46:10.920244 kernel: raid6: int64x4 gen() 3669 MB/s
Mar 3 12:46:10.937244 kernel: raid6: int64x2 gen() 3600 MB/s
Mar 3 12:46:10.955374 kernel: raid6: int64x1 gen() 2767 MB/s
Mar 3 12:46:10.955406 kernel: raid6: using algorithm neonx4 gen() 6584 MB/s
Mar 3 12:46:10.974244 kernel: raid6: .... xor() 4882 MB/s, rmw enabled
Mar 3 12:46:10.974276 kernel: raid6: using neon recovery algorithm
Mar 3 12:46:10.982855 kernel: xor: measuring software checksum speed
Mar 3 12:46:10.982923 kernel: 8regs : 12921 MB/sec
Mar 3 12:46:10.984082 kernel: 32regs : 12446 MB/sec
Mar 3 12:46:10.986480 kernel: arm64_neon : 8702 MB/sec
Mar 3 12:46:10.986515 kernel: xor: using function: 8regs (12921 MB/sec)
Mar 3 12:46:11.078257 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 3 12:46:11.089472 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 12:46:11.096587 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 12:46:11.142799 systemd-udevd[509]: Using default interface naming scheme 'v255'.
Mar 3 12:46:11.153325 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 12:46:11.170124 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 3 12:46:11.222560 dracut-pre-trigger[520]: rd.md=0: removing MD RAID activation
Mar 3 12:46:11.268254 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 12:46:11.275650 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 12:46:11.408496 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 12:46:11.415337 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 3 12:46:11.579280 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 3 12:46:11.581585 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 3 12:46:11.581906 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 3 12:46:11.586659 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 3 12:46:11.592264 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 3 12:46:11.595559 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 3 12:46:11.595868 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 3 12:46:11.602676 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 3 12:46:11.602736 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:0f:49:b0:71:b1
Mar 3 12:46:11.603038 kernel: GPT:9289727 != 33554431
Mar 3 12:46:11.600987 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 12:46:11.613167 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 3 12:46:11.613204 kernel: GPT:9289727 != 33554431
Mar 3 12:46:11.613419 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 3 12:46:11.613449 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 3 12:46:11.601254 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:11.613234 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 12:46:11.621913 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 12:46:11.630960 (udev-worker)[580]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:46:11.631459 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 3 12:46:11.675947 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:11.689267 kernel: nvme nvme0: using unchecked data buffer
Mar 3 12:46:11.817437 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 3 12:46:11.881740 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 3 12:46:11.889256 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 3 12:46:11.914222 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 3 12:46:11.938749 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 3 12:46:11.941732 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 3 12:46:11.952428 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 12:46:11.955629 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 12:46:11.961726 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 12:46:11.969980 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 3 12:46:11.979385 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 3 12:46:11.999624 disk-uuid[689]: Primary Header is updated.
Mar 3 12:46:11.999624 disk-uuid[689]: Secondary Entries is updated.
Mar 3 12:46:11.999624 disk-uuid[689]: Secondary Header is updated.
Mar 3 12:46:12.011644 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 3 12:46:12.031034 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 12:46:13.034368 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 3 12:46:13.035736 disk-uuid[690]: The operation has completed successfully.
Mar 3 12:46:13.221752 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 3 12:46:13.222307 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 3 12:46:13.311795 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 3 12:46:13.336509 sh[958]: Success
Mar 3 12:46:13.365083 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 3 12:46:13.365170 kernel: device-mapper: uevent: version 1.0.3
Mar 3 12:46:13.367189 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 3 12:46:13.379283 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Mar 3 12:46:13.489255 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 3 12:46:13.499546 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 3 12:46:13.521112 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 3 12:46:13.542253 kernel: BTRFS: device fsid 639fb782-fb4f-4fdd-a572-72667a093996 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (981)
Mar 3 12:46:13.548128 kernel: BTRFS info (device dm-0): first mount of filesystem 639fb782-fb4f-4fdd-a572-72667a093996
Mar 3 12:46:13.548191 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 3 12:46:13.573663 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 3 12:46:13.573728 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 3 12:46:13.573754 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 3 12:46:13.587099 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 3 12:46:13.587921 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 12:46:13.597458 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 3 12:46:13.598995 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 3 12:46:13.608806 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 3 12:46:13.665718 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1016)
Mar 3 12:46:13.670781 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:13.670863 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 3 12:46:13.679821 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 3 12:46:13.679893 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 3 12:46:13.689255 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:13.691598 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 3 12:46:13.693433 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 3 12:46:13.799271 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 12:46:13.808757 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 3 12:46:13.898723 systemd-networkd[1152]: lo: Link UP
Mar 3 12:46:13.898748 systemd-networkd[1152]: lo: Gained carrier
Mar 3 12:46:13.901988 systemd-networkd[1152]: Enumeration completed
Mar 3 12:46:13.902141 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 3 12:46:13.904874 systemd[1]: Reached target network.target - Network.
Mar 3 12:46:13.922667 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 12:46:13.922680 systemd-networkd[1152]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 3 12:46:13.939273 systemd-networkd[1152]: eth0: Link UP
Mar 3 12:46:13.939293 systemd-networkd[1152]: eth0: Gained carrier
Mar 3 12:46:13.939316 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 12:46:13.970347 systemd-networkd[1152]: eth0: DHCPv4 address 172.31.20.143/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 3 12:46:13.989351 ignition[1073]: Ignition 2.22.0
Mar 3 12:46:13.989381 ignition[1073]: Stage: fetch-offline
Mar 3 12:46:13.990232 ignition[1073]: no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:13.990258 ignition[1073]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:13.990797 ignition[1073]: Ignition finished successfully
Mar 3 12:46:14.001629 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 12:46:14.008339 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 3 12:46:14.048173 ignition[1161]: Ignition 2.22.0
Mar 3 12:46:14.048694 ignition[1161]: Stage: fetch
Mar 3 12:46:14.049254 ignition[1161]: no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:14.049277 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:14.049399 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:14.072152 ignition[1161]: PUT result: OK
Mar 3 12:46:14.075382 ignition[1161]: parsed url from cmdline: ""
Mar 3 12:46:14.075398 ignition[1161]: no config URL provided
Mar 3 12:46:14.075415 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign"
Mar 3 12:46:14.075438 ignition[1161]: no config at "/usr/lib/ignition/user.ign"
Mar 3 12:46:14.075469 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:14.079732 ignition[1161]: PUT result: OK
Mar 3 12:46:14.079807 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 3 12:46:14.082051 ignition[1161]: GET result: OK
Mar 3 12:46:14.082192 ignition[1161]: parsing config with SHA512: 00715a58bc7ca86fa4456153fe1a5ac14828bcf9716306fec55ec57b1b6c8478ff7836d69dfbee5dd27fc4de8fbdf0ba7b75839ec05887182cef79b2f69431f3
Mar 3 12:46:14.098983 unknown[1161]: fetched base config from "system"
Mar 3 12:46:14.099441 unknown[1161]: fetched base config from "system"
Mar 3 12:46:14.100153 ignition[1161]: fetch: fetch complete
Mar 3 12:46:14.099455 unknown[1161]: fetched user config from "aws"
Mar 3 12:46:14.100165 ignition[1161]: fetch: fetch passed
Mar 3 12:46:14.100385 ignition[1161]: Ignition finished successfully
Mar 3 12:46:14.112169 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 3 12:46:14.118458 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 3 12:46:14.184421 ignition[1167]: Ignition 2.22.0
Mar 3 12:46:14.184460 ignition[1167]: Stage: kargs
Mar 3 12:46:14.187740 ignition[1167]: no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:14.187781 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:14.187953 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:14.196338 ignition[1167]: PUT result: OK
Mar 3 12:46:14.200863 ignition[1167]: kargs: kargs passed
Mar 3 12:46:14.201134 ignition[1167]: Ignition finished successfully
Mar 3 12:46:14.207422 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 3 12:46:14.213267 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 3 12:46:14.271964 ignition[1173]: Ignition 2.22.0
Mar 3 12:46:14.271996 ignition[1173]: Stage: disks
Mar 3 12:46:14.272642 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:14.273024 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:14.273869 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:14.280737 ignition[1173]: PUT result: OK
Mar 3 12:46:14.289997 ignition[1173]: disks: disks passed
Mar 3 12:46:14.290090 ignition[1173]: Ignition finished successfully
Mar 3 12:46:14.295447 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 3 12:46:14.300544 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 3 12:46:14.303336 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 3 12:46:14.311542 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 12:46:14.313854 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 3 12:46:14.316358 systemd[1]: Reached target basic.target - Basic System.
Mar 3 12:46:14.324332 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 3 12:46:14.411195 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 3 12:46:14.415730 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 3 12:46:14.423629 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 3 12:46:14.593245 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f44cfd4f-a1a9-472a-86a7-c3154f299e07 r/w with ordered data mode. Quota mode: none.
Mar 3 12:46:14.594069 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 3 12:46:14.597042 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 3 12:46:14.603073 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 3 12:46:14.614377 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 3 12:46:14.618508 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 3 12:46:14.618591 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 3 12:46:14.618643 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 12:46:14.643370 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 3 12:46:14.649419 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 3 12:46:14.676280 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1200)
Mar 3 12:46:14.681250 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:14.681308 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 3 12:46:14.692252 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 3 12:46:14.692324 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 3 12:46:14.697365 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 3 12:46:14.762921 initrd-setup-root[1224]: cut: /sysroot/etc/passwd: No such file or directory
Mar 3 12:46:14.775067 initrd-setup-root[1231]: cut: /sysroot/etc/group: No such file or directory
Mar 3 12:46:14.784563 initrd-setup-root[1238]: cut: /sysroot/etc/shadow: No such file or directory
Mar 3 12:46:14.796431 initrd-setup-root[1245]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 3 12:46:14.979049 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 3 12:46:14.985303 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 3 12:46:14.989008 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 3 12:46:15.020734 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 3 12:46:15.024678 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:15.052738 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 3 12:46:15.082175 ignition[1315]: INFO : Ignition 2.22.0
Mar 3 12:46:15.082175 ignition[1315]: INFO : Stage: mount
Mar 3 12:46:15.086694 ignition[1315]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:15.086694 ignition[1315]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:15.086694 ignition[1315]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:15.094313 ignition[1315]: INFO : PUT result: OK
Mar 3 12:46:15.099297 ignition[1315]: INFO : mount: mount passed
Mar 3 12:46:15.101359 ignition[1315]: INFO : Ignition finished successfully
Mar 3 12:46:15.102887 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 3 12:46:15.110752 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 3 12:46:15.510375 systemd-networkd[1152]: eth0: Gained IPv6LL
Mar 3 12:46:15.597316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 3 12:46:15.643259 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326)
Mar 3 12:46:15.647494 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:15.647547 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 3 12:46:15.654550 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 3 12:46:15.654624 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 3 12:46:15.658049 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 3 12:46:15.712866 ignition[1343]: INFO : Ignition 2.22.0
Mar 3 12:46:15.712866 ignition[1343]: INFO : Stage: files
Mar 3 12:46:15.717253 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:15.717253 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:15.717253 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:15.717253 ignition[1343]: INFO : PUT result: OK
Mar 3 12:46:15.729186 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping
Mar 3 12:46:15.735875 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 3 12:46:15.735875 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 3 12:46:15.745960 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 3 12:46:15.749542 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 3 12:46:15.753145 unknown[1343]: wrote ssh authorized keys file for user: core
Mar 3 12:46:15.755788 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 3 12:46:15.760087 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 3 12:46:15.764507 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 3 12:46:15.840549 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 3 12:46:15.980284 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 3 12:46:15.980284 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 3 12:46:15.989045 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 3 12:46:16.217369 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 3 12:46:16.338605 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 3 12:46:16.338605 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 3 12:46:16.346636 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 3 12:46:16.346636 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 12:46:16.346636 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 12:46:16.346636 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 12:46:16.346636 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 12:46:16.346636 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 12:46:16.346636 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 12:46:16.375417 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 12:46:16.375417 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 12:46:16.375417 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 3 12:46:16.375417 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 3 12:46:16.375417 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 3 12:46:16.375417 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Mar 3 12:46:16.752604 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 3 12:46:17.147325 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 3 12:46:17.147325 ignition[1343]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 3 12:46:17.157813 ignition[1343]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 12:46:17.157813 ignition[1343]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 12:46:17.157813 ignition[1343]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 3 12:46:17.157813 ignition[1343]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 3 12:46:17.157813 ignition[1343]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 3 12:46:17.157813 ignition[1343]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 12:46:17.157813 ignition[1343]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 12:46:17.157813 ignition[1343]: INFO : files: files passed
Mar 3 12:46:17.157813 ignition[1343]: INFO : Ignition finished successfully
Mar 3 12:46:17.176433 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 3 12:46:17.193695 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 3 12:46:17.201477 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 3 12:46:17.230090 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 3 12:46:17.230412 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 3 12:46:17.245795 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 12:46:17.245795 initrd-setup-root-after-ignition[1373]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 12:46:17.253042 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 12:46:17.259447 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 12:46:17.266002 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 3 12:46:17.272631 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 3 12:46:17.346171 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 3 12:46:17.348462 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 3 12:46:17.355483 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 3 12:46:17.360103 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 3 12:46:17.362612 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 3 12:46:17.368371 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 3 12:46:17.407420 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 12:46:17.417411 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 3 12:46:17.455055 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 3 12:46:17.463519 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 12:46:17.469435 systemd[1]: Stopped target timers.target - Timer Units.
Mar 3 12:46:17.471771 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 3 12:46:17.472065 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 12:46:17.477280 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 3 12:46:17.480236 systemd[1]: Stopped target basic.target - Basic System.
Mar 3 12:46:17.481989 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 3 12:46:17.489071 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 12:46:17.494093 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 3 12:46:17.496336 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 12:46:17.501147 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 3 12:46:17.506176 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 12:46:17.510753 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 3 12:46:17.515884 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 3 12:46:17.520376 systemd[1]: Stopped target swap.target - Swaps.
Mar 3 12:46:17.525103 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 3 12:46:17.525429 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 12:46:17.533842 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 3 12:46:17.535788 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 12:46:17.540296 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 3 12:46:17.550184 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 12:46:17.550417 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 3 12:46:17.550628 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 3 12:46:17.562744 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 3 12:46:17.563100 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 12:46:17.566142 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 3 12:46:17.566432 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 3 12:46:17.571337 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 3 12:46:17.573434 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 3 12:46:17.573784 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 12:46:17.589182 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 3 12:46:17.596843 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 3 12:46:17.597257 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 12:46:17.600865 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 3 12:46:17.601092 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 12:46:17.645666 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 3 12:46:17.645887 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 3 12:46:17.660013 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 3 12:46:17.673536 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 3 12:46:17.675817 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 3 12:46:17.683104 ignition[1397]: INFO : Ignition 2.22.0
Mar 3 12:46:17.683104 ignition[1397]: INFO : Stage: umount
Mar 3 12:46:17.686855 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:17.686855 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:17.686855 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:17.695027 ignition[1397]: INFO : PUT result: OK
Mar 3 12:46:17.702974 ignition[1397]: INFO : umount: umount passed
Mar 3 12:46:17.704865 ignition[1397]: INFO : Ignition finished successfully
Mar 3 12:46:17.709897 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 3 12:46:17.710369 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 3 12:46:17.717546 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 3 12:46:17.717758 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 3 12:46:17.725183 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 3 12:46:17.725319 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 3 12:46:17.728193 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 3 12:46:17.728291 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 3 12:46:17.734540 systemd[1]: Stopped target network.target - Network.
Mar 3 12:46:17.737679 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 3 12:46:17.737767 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 12:46:17.744536 systemd[1]: Stopped target paths.target - Path Units.
Mar 3 12:46:17.746717 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 3 12:46:17.748981 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 12:46:17.752269 systemd[1]: Stopped target slices.target - Slice Units.
Mar 3 12:46:17.754825 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 3 12:46:17.761488 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 3 12:46:17.761560 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 12:46:17.764414 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 3 12:46:17.764480 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 12:46:17.770885 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 3 12:46:17.770979 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 3 12:46:17.773955 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 3 12:46:17.774028 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 3 12:46:17.780421 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 3 12:46:17.780505 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 3 12:46:17.786360 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 3 12:46:17.789515 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 3 12:46:17.808426 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 3 12:46:17.808645 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 3 12:46:17.816983 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 3 12:46:17.817418 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 3 12:46:17.817622 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 3 12:46:17.840237 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 3 12:46:17.841454 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 3 12:46:17.847547 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 3 12:46:17.847631 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 12:46:17.860344 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 3 12:46:17.865205 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 3 12:46:17.867177 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 12:46:17.880148 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 3 12:46:17.880297 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 3 12:46:17.891982 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 3 12:46:17.892072 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 3 12:46:17.894737 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 3 12:46:17.894820 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 12:46:17.905190 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 12:46:17.915146 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 3 12:46:17.918651 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 3 12:46:17.936852 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 3 12:46:17.939865 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 12:46:17.943672 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 3 12:46:17.943812 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 3 12:46:17.947641 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 3 12:46:17.947711 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 12:46:17.956511 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 3 12:46:17.956609 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 12:46:17.972166 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 3 12:46:17.972300 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 3 12:46:17.978893 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 3 12:46:17.978995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 12:46:17.987856 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 3 12:46:17.995039 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 3 12:46:17.995156 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 12:46:17.998130 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 3 12:46:17.998235 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 12:46:18.001240 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 12:46:18.001326 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:18.013864 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 3 12:46:18.013978 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 3 12:46:18.014062 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 3 12:46:18.014735 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 3 12:46:18.017008 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 3 12:46:18.048561 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 3 12:46:18.050763 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 3 12:46:18.058944 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 3 12:46:18.065198 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 3 12:46:18.095532 systemd[1]: Switching root.
Mar 3 12:46:18.151098 systemd-journald[259]: Journal stopped
Mar 3 12:46:20.150532 systemd-journald[259]: Received SIGTERM from PID 1 (systemd).
Mar 3 12:46:20.150656 kernel: SELinux: policy capability network_peer_controls=1
Mar 3 12:46:20.150691 kernel: SELinux: policy capability open_perms=1
Mar 3 12:46:20.150721 kernel: SELinux: policy capability extended_socket_class=1
Mar 3 12:46:20.150752 kernel: SELinux: policy capability always_check_network=0
Mar 3 12:46:20.150779 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 3 12:46:20.150809 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 3 12:46:20.150837 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 3 12:46:20.150894 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 3 12:46:20.150922 kernel: SELinux: policy capability userspace_initial_context=0
Mar 3 12:46:20.150954 kernel: audit: type=1403 audit(1772541978.463:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 3 12:46:20.150986 systemd[1]: Successfully loaded SELinux policy in 76.881ms.
Mar 3 12:46:20.151034 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.901ms.
Mar 3 12:46:20.151068 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 12:46:20.151100 systemd[1]: Detected virtualization amazon.
Mar 3 12:46:20.151129 systemd[1]: Detected architecture arm64.
Mar 3 12:46:20.151159 systemd[1]: Detected first boot.
Mar 3 12:46:20.151189 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 12:46:20.151240 zram_generator::config[1441]: No configuration found.
Mar 3 12:46:20.151309 kernel: NET: Registered PF_VSOCK protocol family
Mar 3 12:46:20.151339 systemd[1]: Populated /etc with preset unit settings.
Mar 3 12:46:20.151371 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 3 12:46:20.151403 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 3 12:46:20.151433 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 3 12:46:20.151465 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 3 12:46:20.151496 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 3 12:46:20.151526 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 3 12:46:20.151559 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 3 12:46:20.151587 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 3 12:46:20.151615 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 3 12:46:20.151647 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 3 12:46:20.151677 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 3 12:46:20.151707 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 3 12:46:20.151736 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 12:46:20.151767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 12:46:20.151795 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 3 12:46:20.151829 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 3 12:46:20.151859 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 3 12:46:20.151887 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 12:46:20.151916 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 3 12:46:20.151946 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 12:46:20.151975 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 12:46:20.152003 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 3 12:46:20.152037 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 3 12:46:20.152067 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 3 12:46:20.152095 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 3 12:46:20.152125 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 12:46:20.152154 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 12:46:20.152183 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 12:46:20.152231 systemd[1]: Reached target swap.target - Swaps.
Mar 3 12:46:20.152264 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 3 12:46:20.152296 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 3 12:46:20.152328 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 3 12:46:20.156750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 12:46:20.156786 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 12:46:20.156814 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 12:46:20.156844 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 3 12:46:20.156875 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 3 12:46:20.156907 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 3 12:46:20.156938 systemd[1]: Mounting media.mount - External Media Directory...
Mar 3 12:46:20.156966 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 3 12:46:20.156999 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 3 12:46:20.157029 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 3 12:46:20.157058 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 3 12:46:20.157089 systemd[1]: Reached target machines.target - Containers.
Mar 3 12:46:20.157117 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 3 12:46:20.157148 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 12:46:20.157178 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 12:46:20.157206 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 3 12:46:20.157265 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 12:46:20.157300 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 12:46:20.157329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 12:46:20.157357 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 3 12:46:20.157388 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 12:46:20.157417 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 3 12:46:20.157447 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 3 12:46:20.157475 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 3 12:46:20.157504 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 3 12:46:20.157535 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 3 12:46:20.157577 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 12:46:20.157606 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 12:46:20.157632 kernel: fuse: init (API version 7.41)
Mar 3 12:46:20.157662 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 12:46:20.157690 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 12:46:20.157720 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 3 12:46:20.157746 kernel: loop: module loaded
Mar 3 12:46:20.157775 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 3 12:46:20.157808 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 12:46:20.157838 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 3 12:46:20.157868 systemd[1]: Stopped verity-setup.service.
Mar 3 12:46:20.157897 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 3 12:46:20.157926 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 3 12:46:20.157958 systemd[1]: Mounted media.mount - External Media Directory.
Mar 3 12:46:20.157986 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 3 12:46:20.158015 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 3 12:46:20.158043 kernel: ACPI: bus type drm_connector registered
Mar 3 12:46:20.158070 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 3 12:46:20.158102 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 12:46:20.158130 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 3 12:46:20.158157 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 3 12:46:20.158184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 12:46:20.158240 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 12:46:20.158323 systemd-journald[1529]: Collecting audit messages is disabled.
Mar 3 12:46:20.158391 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 3 12:46:20.158421 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 12:46:20.158452 systemd-journald[1529]: Journal started
Mar 3 12:46:20.158496 systemd-journald[1529]: Runtime Journal (/run/log/journal/ec22180754e0376a1e618b842629865b) is 8M, max 75.3M, 67.3M free.
Mar 3 12:46:19.513579 systemd[1]: Queued start job for default target multi-user.target.
Mar 3 12:46:19.530026 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 3 12:46:19.530898 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 3 12:46:20.163278 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 12:46:20.172252 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 12:46:20.174107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 12:46:20.176336 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 12:46:20.179757 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 3 12:46:20.180121 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 3 12:46:20.184258 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 12:46:20.184603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 12:46:20.188986 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 3 12:46:20.193276 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 12:46:20.196663 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 3 12:46:20.200285 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 3 12:46:20.222561 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 12:46:20.228436 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 3 12:46:20.238381 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 3 12:46:20.241176 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 3 12:46:20.241298 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 12:46:20.245390 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 3 12:46:20.258747 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 3 12:46:20.262557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 12:46:20.272480 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 3 12:46:20.281506 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 3 12:46:20.284282 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 12:46:20.291608 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 3 12:46:20.294283 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 12:46:20.299948 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 12:46:20.306839 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 3 12:46:20.316525 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 3 12:46:20.323553 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 3 12:46:20.326661 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 3 12:46:20.364689 systemd-journald[1529]: Time spent on flushing to /var/log/journal/ec22180754e0376a1e618b842629865b is 198.857ms for 928 entries.
Mar 3 12:46:20.364689 systemd-journald[1529]: System Journal (/var/log/journal/ec22180754e0376a1e618b842629865b) is 8M, max 195.6M, 187.6M free.
Mar 3 12:46:20.584414 systemd-journald[1529]: Received client request to flush runtime journal.
Mar 3 12:46:20.584519 kernel: loop0: detected capacity change from 0 to 100632
Mar 3 12:46:20.584568 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 3 12:46:20.584608 kernel: loop1: detected capacity change from 0 to 119840
Mar 3 12:46:20.416076 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 3 12:46:20.419139 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 3 12:46:20.429955 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 3 12:46:20.498199 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 12:46:20.540833 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 3 12:46:20.545103 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 3 12:46:20.572909 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 12:46:20.581142 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 3 12:46:20.591675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 3 12:46:20.598446 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 3 12:46:20.606266 kernel: loop2: detected capacity change from 0 to 61264
Mar 3 12:46:20.667782 kernel: loop3: detected capacity change from 0 to 200864
Mar 3 12:46:20.673935 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Mar 3 12:46:20.674500 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Mar 3 12:46:20.682707 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 12:46:21.062265 kernel: loop4: detected capacity change from 0 to 100632
Mar 3 12:46:21.089270 kernel: loop5: detected capacity change from 0 to 119840
Mar 3 12:46:21.131251 kernel: loop6: detected capacity change from 0 to 61264
Mar 3 12:46:21.151264 kernel: loop7: detected capacity change from 0 to 200864
Mar 3 12:46:21.159292 ldconfig[1570]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 3 12:46:21.165309 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 3 12:46:21.178032 (sd-merge)[1602]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 3 12:46:21.179095 (sd-merge)[1602]: Merged extensions into '/usr'.
Mar 3 12:46:21.189434 systemd[1]: Reload requested from client PID 1575 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 3 12:46:21.189465 systemd[1]: Reloading...
Mar 3 12:46:21.369262 zram_generator::config[1629]: No configuration found.
Mar 3 12:46:21.765733 systemd[1]: Reloading finished in 575 ms.
Mar 3 12:46:21.806774 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 3 12:46:21.810254 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 3 12:46:21.825490 systemd[1]: Starting ensure-sysext.service...
Mar 3 12:46:21.830486 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 3 12:46:21.840720 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 12:46:21.879001 systemd[1]: Reload requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)...
Mar 3 12:46:21.879032 systemd[1]: Reloading...
Mar 3 12:46:21.883477 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 3 12:46:21.883992 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 3 12:46:21.884702 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 3 12:46:21.885355 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 3 12:46:21.887187 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 3 12:46:21.887922 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
Mar 3 12:46:21.888151 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
Mar 3 12:46:21.895680 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 12:46:21.895906 systemd-tmpfiles[1683]: Skipping /boot
Mar 3 12:46:21.914868 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 12:46:21.915057 systemd-tmpfiles[1683]: Skipping /boot
Mar 3 12:46:21.968646 systemd-udevd[1684]: Using default interface naming scheme 'v255'.
Mar 3 12:46:22.123244 zram_generator::config[1736]: No configuration found.
Mar 3 12:46:22.268525 (udev-worker)[1718]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:46:22.721302 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 3 12:46:22.721450 systemd[1]: Reloading finished in 841 ms.
Mar 3 12:46:22.762701 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 12:46:22.788336 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 12:46:22.836612 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 12:46:22.845664 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 3 12:46:22.851583 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 3 12:46:22.859688 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 3 12:46:22.880194 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 3 12:46:22.894607 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 3 12:46:22.910112 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 12:46:22.915235 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 12:46:22.926734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 12:46:22.937488 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 12:46:22.940175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 12:46:22.940461 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 12:46:22.951783 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 3 12:46:22.958518 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 12:46:22.958882 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 12:46:22.959088 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 12:46:22.974411 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 12:46:22.976743 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 12:46:22.979328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 12:46:22.979543 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 12:46:22.979856 systemd[1]: Reached target time-set.target - System Time Set.
Mar 3 12:46:22.995300 systemd[1]: Finished ensure-sysext.service.
Mar 3 12:46:23.059302 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 3 12:46:23.079329 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 3 12:46:23.088729 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 3 12:46:23.095428 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 12:46:23.097318 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 12:46:23.116081 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 3 12:46:23.120333 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 3 12:46:23.144026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 12:46:23.148581 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 12:46:23.153360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 12:46:23.154098 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 12:46:23.167467 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 12:46:23.173826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 12:46:23.179601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 12:46:23.180711 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 12:46:23.201584 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 3 12:46:23.210344 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 12:46:23.226783 augenrules[1927]: No rules
Mar 3 12:46:23.232176 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 12:46:23.236878 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 12:46:23.369803 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 3 12:46:23.374461 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 3 12:46:23.420992 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 3 12:46:23.430199 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 3 12:46:23.464830 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:23.551313 systemd-networkd[1859]: lo: Link UP
Mar 3 12:46:23.551328 systemd-networkd[1859]: lo: Gained carrier
Mar 3 12:46:23.554036 systemd-networkd[1859]: Enumeration completed
Mar 3 12:46:23.555045 systemd-networkd[1859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 12:46:23.555053 systemd-networkd[1859]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 3 12:46:23.555381 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 3 12:46:23.560614 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 3 12:46:23.567670 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 3 12:46:23.570501 systemd-networkd[1859]: eth0: Link UP
Mar 3 12:46:23.570776 systemd-networkd[1859]: eth0: Gained carrier
Mar 3 12:46:23.570813 systemd-networkd[1859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 12:46:23.581351 systemd-networkd[1859]: eth0: DHCPv4 address 172.31.20.143/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 3 12:46:23.585071 systemd-resolved[1860]: Positive Trust Anchors:
Mar 3 12:46:23.585107 systemd-resolved[1860]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 3 12:46:23.585168 systemd-resolved[1860]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 3 12:46:23.599487 systemd-resolved[1860]: Defaulting to hostname 'linux'.
Mar 3 12:46:23.603139 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 3 12:46:23.607551 systemd[1]: Reached target network.target - Network.
Mar 3 12:46:23.609654 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 3 12:46:23.612362 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 3 12:46:23.614960 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 3 12:46:23.617802 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 3 12:46:23.621318 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 3 12:46:23.624853 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 3 12:46:23.627756 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 3 12:46:23.630593 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 3 12:46:23.630654 systemd[1]: Reached target paths.target - Path Units.
Mar 3 12:46:23.632828 systemd[1]: Reached target timers.target - Timer Units. Mar 3 12:46:23.636292 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 3 12:46:23.641095 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 3 12:46:23.647778 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 3 12:46:23.651006 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 3 12:46:23.653848 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 3 12:46:23.662313 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 3 12:46:23.665236 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 3 12:46:23.671259 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 3 12:46:23.674668 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 3 12:46:23.679284 systemd[1]: Reached target sockets.target - Socket Units. Mar 3 12:46:23.681796 systemd[1]: Reached target basic.target - Basic System. Mar 3 12:46:23.684052 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 3 12:46:23.684108 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 3 12:46:23.686142 systemd[1]: Starting containerd.service - containerd container runtime... Mar 3 12:46:23.694505 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 3 12:46:23.703495 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 3 12:46:23.715468 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 3 12:46:23.726148 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Mar 3 12:46:23.737817 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 3 12:46:23.742143 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 3 12:46:23.751825 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 3 12:46:23.763666 systemd[1]: Started ntpd.service - Network Time Service. Mar 3 12:46:23.771097 jq[1969]: false Mar 3 12:46:23.771832 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 3 12:46:23.781672 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 3 12:46:23.801530 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 3 12:46:23.810599 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 3 12:46:23.824417 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 3 12:46:23.826304 extend-filesystems[1970]: Found /dev/nvme0n1p6 Mar 3 12:46:23.829871 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 3 12:46:23.832901 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 3 12:46:23.841599 systemd[1]: Starting update-engine.service - Update Engine... Mar 3 12:46:23.858429 extend-filesystems[1970]: Found /dev/nvme0n1p9 Mar 3 12:46:23.855580 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 3 12:46:23.881533 extend-filesystems[1970]: Checking size of /dev/nvme0n1p9 Mar 3 12:46:23.881090 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 3 12:46:23.885751 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Mar 3 12:46:23.886644 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 3 12:46:23.905417 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 3 12:46:23.908360 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 3 12:46:23.989645 jq[1986]: true Mar 3 12:46:23.990008 ntpd[1973]: 3 Mar 12:46:23 ntpd[1973]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:21:35 UTC 2026 (1): Starting Mar 3 12:46:23.990008 ntpd[1973]: 3 Mar 12:46:23 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 3 12:46:23.990008 ntpd[1973]: 3 Mar 12:46:23 ntpd[1973]: ---------------------------------------------------- Mar 3 12:46:23.990008 ntpd[1973]: 3 Mar 12:46:23 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Mar 3 12:46:23.990008 ntpd[1973]: 3 Mar 12:46:23 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 3 12:46:23.990008 ntpd[1973]: 3 Mar 12:46:23 ntpd[1973]: corporation. Support and training for ntp-4 are Mar 3 12:46:23.990008 ntpd[1973]: 3 Mar 12:46:23 ntpd[1973]: available at https://www.nwtime.org/support Mar 3 12:46:23.990008 ntpd[1973]: 3 Mar 12:46:23 ntpd[1973]: ---------------------------------------------------- Mar 3 12:46:23.988712 ntpd[1973]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:21:35 UTC 2026 (1): Starting Mar 3 12:46:23.988811 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 3 12:46:23.988833 ntpd[1973]: ---------------------------------------------------- Mar 3 12:46:23.988851 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Mar 3 12:46:23.988866 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 3 12:46:23.988882 ntpd[1973]: corporation. 
Support and training for ntp-4 are Mar 3 12:46:23.988898 ntpd[1973]: available at https://www.nwtime.org/support Mar 3 12:46:23.988913 ntpd[1973]: ---------------------------------------------------- Mar 3 12:46:24.002472 update_engine[1984]: I20260303 12:46:23.997669 1984 main.cc:92] Flatcar Update Engine starting Mar 3 12:46:24.011268 ntpd[1973]: proto: precision = 0.096 usec (-23) Mar 3 12:46:24.013529 ntpd[1973]: 3 Mar 12:46:24 ntpd[1973]: proto: precision = 0.096 usec (-23) Mar 3 12:46:24.015550 ntpd[1973]: basedate set to 2026-02-19 Mar 3 12:46:24.016031 systemd[1]: motdgen.service: Deactivated successfully. Mar 3 12:46:24.021318 ntpd[1973]: 3 Mar 12:46:24 ntpd[1973]: basedate set to 2026-02-19 Mar 3 12:46:24.021318 ntpd[1973]: 3 Mar 12:46:24 ntpd[1973]: gps base set to 2026-02-22 (week 2407) Mar 3 12:46:24.021318 ntpd[1973]: 3 Mar 12:46:24 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Mar 3 12:46:24.021318 ntpd[1973]: 3 Mar 12:46:24 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 3 12:46:24.021318 ntpd[1973]: 3 Mar 12:46:24 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Mar 3 12:46:24.021318 ntpd[1973]: 3 Mar 12:46:24 ntpd[1973]: Listen normally on 3 eth0 172.31.20.143:123 Mar 3 12:46:24.021318 ntpd[1973]: 3 Mar 12:46:24 ntpd[1973]: Listen normally on 4 lo [::1]:123 Mar 3 12:46:24.021318 ntpd[1973]: 3 Mar 12:46:24 ntpd[1973]: bind(21) AF_INET6 [fe80::40f:49ff:feb0:71b1%2]:123 flags 0x811 failed: Cannot assign requested address Mar 3 12:46:24.021318 ntpd[1973]: 3 Mar 12:46:24 ntpd[1973]: unable to create socket on eth0 (5) for [fe80::40f:49ff:feb0:71b1%2]:123 Mar 3 12:46:24.015597 ntpd[1973]: gps base set to 2026-02-22 (week 2407) Mar 3 12:46:24.016481 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 3 12:46:24.015778 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Mar 3 12:46:24.015823 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 3 12:46:24.016108 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Mar 3 12:46:24.016150 ntpd[1973]: Listen normally on 3 eth0 172.31.20.143:123 Mar 3 12:46:24.016196 ntpd[1973]: Listen normally on 4 lo [::1]:123 Mar 3 12:46:24.016273 ntpd[1973]: bind(21) AF_INET6 [fe80::40f:49ff:feb0:71b1%2]:123 flags 0x811 failed: Cannot assign requested address Mar 3 12:46:24.016318 ntpd[1973]: unable to create socket on eth0 (5) for [fe80::40f:49ff:feb0:71b1%2]:123 Mar 3 12:46:24.030092 systemd-coredump[2019]: Process 1973 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Mar 3 12:46:24.036827 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Mar 3 12:46:24.051409 extend-filesystems[1970]: Resized partition /dev/nvme0n1p9 Mar 3 12:46:24.057092 extend-filesystems[2021]: resize2fs 1.47.3 (8-Jul-2025) Mar 3 12:46:24.061824 systemd[1]: Started systemd-coredump@0-2019-0.service - Process Core Dump (PID 2019/UID 0). Mar 3 12:46:24.079800 (ntainerd)[2012]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 3 12:46:24.081500 tar[1992]: linux-arm64/LICENSE Mar 3 12:46:24.081500 tar[1992]: linux-arm64/helm Mar 3 12:46:24.096133 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 3 12:46:24.108483 jq[2017]: true Mar 3 12:46:24.123968 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 3 12:46:24.167817 dbus-daemon[1967]: [system] SELinux support is enabled Mar 3 12:46:24.168138 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 3 12:46:24.178591 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 3 12:46:24.178653 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 3 12:46:24.184200 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.229 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.234 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.235 INFO Fetch successful Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.235 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.238 INFO Fetch successful Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.238 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.241 INFO Fetch successful Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.241 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.245 INFO Fetch successful Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.245 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.249 INFO Fetch failed with 404: resource not found Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.249 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.251 INFO Fetch successful Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.251 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.255 INFO Fetch successful Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.255 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.257 INFO Fetch successful Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.257 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.265 INFO Fetch successful Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.265 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 3 12:46:24.269756 coreos-metadata[1966]: Mar 03 12:46:24.269 INFO Fetch successful Mar 3 12:46:24.204205 dbus-daemon[1967]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1859 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 3 12:46:24.276615 update_engine[1984]: I20260303 12:46:24.207525 1984 update_check_scheduler.cc:74] Next update check in 10m32s Mar 3 12:46:24.184274 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 3 12:46:24.333876 systemd-logind[1982]: Watching system buttons on /dev/input/event0 (Power Button) Mar 3 12:46:24.333929 systemd-logind[1982]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 3 12:46:24.338534 bash[2041]: Updated "/home/core/.ssh/authorized_keys" Mar 3 12:46:24.340026 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 3 12:46:24.341920 systemd-logind[1982]: New seat seat0. Mar 3 12:46:24.359274 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 3 12:46:24.370290 systemd[1]: Started systemd-logind.service - User Login Management. Mar 3 12:46:24.388277 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 3 12:46:24.401069 extend-filesystems[2021]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 3 12:46:24.401069 extend-filesystems[2021]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 3 12:46:24.401069 extend-filesystems[2021]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 3 12:46:24.422081 extend-filesystems[1970]: Resized filesystem in /dev/nvme0n1p9 Mar 3 12:46:24.468684 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 3 12:46:24.472360 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 3 12:46:24.480711 systemd[1]: Started update-engine.service - Update Engine. Mar 3 12:46:24.497015 systemd[1]: Starting sshkeys.service... Mar 3 12:46:24.512650 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 3 12:46:24.564827 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 3 12:46:24.580108 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 3 12:46:24.595915 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Mar 3 12:46:24.598741 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 3 12:46:24.769166 containerd[2012]: time="2026-03-03T12:46:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 3 12:46:24.775545 containerd[2012]: time="2026-03-03T12:46:24.772970471Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 3 12:46:24.929504 containerd[2012]: time="2026-03-03T12:46:24.929445948Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.048µs" Mar 3 12:46:24.931323 containerd[2012]: time="2026-03-03T12:46:24.931270344Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 3 12:46:24.934279 containerd[2012]: time="2026-03-03T12:46:24.931445556Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 3 12:46:24.934279 containerd[2012]: time="2026-03-03T12:46:24.931748124Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 3 12:46:24.934279 containerd[2012]: time="2026-03-03T12:46:24.931778544Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 3 12:46:24.934279 containerd[2012]: time="2026-03-03T12:46:24.931830660Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 3 12:46:24.934279 containerd[2012]: time="2026-03-03T12:46:24.931948932Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 3 12:46:24.934279 containerd[2012]: 
time="2026-03-03T12:46:24.931974072Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 3 12:46:24.946272 containerd[2012]: time="2026-03-03T12:46:24.945452844Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 3 12:46:24.946272 containerd[2012]: time="2026-03-03T12:46:24.945507456Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 3 12:46:24.946272 containerd[2012]: time="2026-03-03T12:46:24.945540216Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 3 12:46:24.946272 containerd[2012]: time="2026-03-03T12:46:24.945563088Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 3 12:46:24.946272 containerd[2012]: time="2026-03-03T12:46:24.945795048Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 3 12:46:24.949499 containerd[2012]: time="2026-03-03T12:46:24.946667616Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 3 12:46:24.949499 containerd[2012]: time="2026-03-03T12:46:24.946744332Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 3 12:46:24.949499 containerd[2012]: time="2026-03-03T12:46:24.946772460Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 3 12:46:24.949751 containerd[2012]: time="2026-03-03T12:46:24.948774084Z" 
level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 3 12:46:24.962533 containerd[2012]: time="2026-03-03T12:46:24.957875172Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 3 12:46:24.962533 containerd[2012]: time="2026-03-03T12:46:24.958079292Z" level=info msg="metadata content store policy set" policy=shared Mar 3 12:46:24.973346 containerd[2012]: time="2026-03-03T12:46:24.973285380Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 3 12:46:24.973775 containerd[2012]: time="2026-03-03T12:46:24.973669056Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 3 12:46:24.973775 containerd[2012]: time="2026-03-03T12:46:24.973730532Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 3 12:46:24.973775 containerd[2012]: time="2026-03-03T12:46:24.973763604Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 3 12:46:24.973916 containerd[2012]: time="2026-03-03T12:46:24.973794876Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 3 12:46:24.973916 containerd[2012]: time="2026-03-03T12:46:24.973842408Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 3 12:46:24.973916 containerd[2012]: time="2026-03-03T12:46:24.973871388Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 3 12:46:24.973916 containerd[2012]: time="2026-03-03T12:46:24.973901376Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 3 12:46:24.974097 containerd[2012]: time="2026-03-03T12:46:24.973932732Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 3 12:46:24.974097 containerd[2012]: time="2026-03-03T12:46:24.973959312Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 3 12:46:24.974097 containerd[2012]: time="2026-03-03T12:46:24.973986972Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 3 12:46:24.974097 containerd[2012]: time="2026-03-03T12:46:24.974017476Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 3 12:46:24.974290 containerd[2012]: time="2026-03-03T12:46:24.974265432Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 3 12:46:24.974359 containerd[2012]: time="2026-03-03T12:46:24.974303940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 3 12:46:24.974359 containerd[2012]: time="2026-03-03T12:46:24.974338596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 3 12:46:24.974438 containerd[2012]: time="2026-03-03T12:46:24.974366556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 3 12:46:24.974438 containerd[2012]: time="2026-03-03T12:46:24.974393244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 3 12:46:24.974438 containerd[2012]: time="2026-03-03T12:46:24.974420280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 3 12:46:24.974558 containerd[2012]: time="2026-03-03T12:46:24.974448228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 3 12:46:24.974558 containerd[2012]: time="2026-03-03T12:46:24.974483304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 3 
12:46:24.974558 containerd[2012]: time="2026-03-03T12:46:24.974513604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 3 12:46:24.974558 containerd[2012]: time="2026-03-03T12:46:24.974542692Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 3 12:46:24.974726 containerd[2012]: time="2026-03-03T12:46:24.974568768Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 3 12:46:24.979236 containerd[2012]: time="2026-03-03T12:46:24.974930124Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 3 12:46:24.979236 containerd[2012]: time="2026-03-03T12:46:24.974976924Z" level=info msg="Start snapshots syncer" Mar 3 12:46:24.979236 containerd[2012]: time="2026-03-03T12:46:24.975036348Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.975581112Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.975681000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.975795756Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976013820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976055124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976084284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976110624Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976139280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976166256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976194828Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976279392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976309992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976347684Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976399728Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976431084Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976453752Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976477992Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976500108Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976524420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976559712Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976761984Z" level=info msg="runtime interface created"
Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976780188Z" level=info msg="created NRI interface"
Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976801332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976829868Z" level=info msg="Connect containerd service"
Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.976871184Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 3 12:46:24.979454 containerd[2012]: time="2026-03-03T12:46:24.978180120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 3 12:46:25.076141 coreos-metadata[2093]: Mar 03 12:46:25.074 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 3 12:46:25.080241 coreos-metadata[2093]: Mar 03 12:46:25.077 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 3 12:46:25.080555 coreos-metadata[2093]: Mar 03 12:46:25.080 INFO Fetch successful
Mar 3 12:46:25.080555 coreos-metadata[2093]: Mar 03 12:46:25.080 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 3 12:46:25.086333 coreos-metadata[2093]: Mar 03 12:46:25.084 INFO Fetch successful
Mar 3 12:46:25.092867 unknown[2093]: wrote ssh authorized keys file for user: core
Mar 3 12:46:25.110514 systemd-networkd[1859]: eth0: Gained IPv6LL
Mar 3 12:46:25.124910 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 3 12:46:25.129392 systemd[1]: Reached target network-online.target - Network is Online.
Mar 3 12:46:25.135242 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 3 12:46:25.142791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 12:46:25.153370 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 3 12:46:25.172049 systemd-coredump[2020]: Process 1973 (ntpd) of user 0 dumped core.
    Module libnss_usrfiles.so.2 without build-id.
    Module libgcc_s.so.1 without build-id.
    Module libc.so.6 without build-id.
    Module libcrypto.so.3 without build-id.
    Module libm.so.6 without build-id.
    Module libcap.so.2 without build-id.
    Module ntpd without build-id.
    Stack trace of thread 1973:
    #0 0x0000aaaac8e20b5c n/a (ntpd + 0x60b5c)
    #1 0x0000aaaac8dcfe60 n/a (ntpd + 0xfe60)
    #2 0x0000aaaac8dd0240 n/a (ntpd + 0x10240)
    #3 0x0000aaaac8dcbe14 n/a (ntpd + 0xbe14)
    #4 0x0000aaaac8dcd3ec n/a (ntpd + 0xd3ec)
    #5 0x0000aaaac8dd5a38 n/a (ntpd + 0x15a38)
    #6 0x0000aaaac8dc738c n/a (ntpd + 0x738c)
    #7 0x0000ffffa4df2034 n/a (libc.so.6 + 0x22034)
    #8 0x0000ffffa4df2118 __libc_start_main (libc.so.6 + 0x22118)
    #9 0x0000aaaac8dc73f0 n/a (ntpd + 0x73f0)
    ELF object binary architecture: AARCH64
Mar 3 12:46:25.190601 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Mar 3 12:46:25.190941 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Mar 3 12:46:25.205261 update-ssh-keys[2166]: Updated "/home/core/.ssh/authorized_keys"
Mar 3 12:46:25.211096 systemd[1]: systemd-coredump@0-2019-0.service: Deactivated successfully.
Mar 3 12:46:25.228525 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 3 12:46:25.241637 systemd[1]: Finished sshkeys.service.
Mar 3 12:46:25.307482 locksmithd[2085]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 3 12:46:25.325430 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Mar 3 12:46:25.330445 systemd[1]: Started ntpd.service - Network Time Service.
Mar 3 12:46:25.381316 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 3 12:46:25.409337 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 3 12:46:25.435523 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 3 12:46:25.445054 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 3 12:46:25.463062 dbus-daemon[1967]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2050 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 3 12:46:25.474516 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 3 12:46:25.518182 ntpd[2197]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:21:35 UTC 2026 (1): Starting
Mar 3 12:46:25.520537 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:21:35 UTC 2026 (1): Starting
Mar 3 12:46:25.520537 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 12:46:25.520537 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: ----------------------------------------------------
Mar 3 12:46:25.520537 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: ntp-4 is maintained by Network Time Foundation,
Mar 3 12:46:25.520537 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 12:46:25.520537 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: corporation. Support and training for ntp-4 are
Mar 3 12:46:25.520537 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: available at https://www.nwtime.org/support
Mar 3 12:46:25.520537 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: ----------------------------------------------------
Mar 3 12:46:25.518336 ntpd[2197]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 12:46:25.518355 ntpd[2197]: ----------------------------------------------------
Mar 3 12:46:25.518372 ntpd[2197]: ntp-4 is maintained by Network Time Foundation,
Mar 3 12:46:25.518388 ntpd[2197]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 12:46:25.518403 ntpd[2197]: corporation. Support and training for ntp-4 are
Mar 3 12:46:25.518419 ntpd[2197]: available at https://www.nwtime.org/support
Mar 3 12:46:25.518434 ntpd[2197]: ----------------------------------------------------
Mar 3 12:46:25.525577 ntpd[2197]: proto: precision = 0.096 usec (-23)
Mar 3 12:46:25.532868 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: proto: precision = 0.096 usec (-23)
Mar 3 12:46:25.532868 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: basedate set to 2026-02-19
Mar 3 12:46:25.532868 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: gps base set to 2026-02-22 (week 2407)
Mar 3 12:46:25.532868 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 12:46:25.532868 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 12:46:25.532868 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 12:46:25.532868 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: Listen normally on 3 eth0 172.31.20.143:123
Mar 3 12:46:25.532868 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: Listen normally on 4 lo [::1]:123
Mar 3 12:46:25.532868 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: Listen normally on 5 eth0 [fe80::40f:49ff:feb0:71b1%2]:123
Mar 3 12:46:25.532868 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: Listening on routing socket on fd #22 for interface updates
Mar 3 12:46:25.525917 ntpd[2197]: basedate set to 2026-02-19
Mar 3 12:46:25.525937 ntpd[2197]: gps base set to 2026-02-22 (week 2407)
Mar 3 12:46:25.526056 ntpd[2197]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 12:46:25.526099 ntpd[2197]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 12:46:25.528465 ntpd[2197]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 12:46:25.528519 ntpd[2197]: Listen normally on 3 eth0 172.31.20.143:123
Mar 3 12:46:25.528566 ntpd[2197]: Listen normally on 4 lo [::1]:123
Mar 3 12:46:25.528608 ntpd[2197]: Listen normally on 5 eth0 [fe80::40f:49ff:feb0:71b1%2]:123
Mar 3 12:46:25.528651 ntpd[2197]: Listening on routing socket on fd #22 for interface updates
Mar 3 12:46:25.576387 amazon-ssm-agent[2169]: Initializing new seelog logger
Mar 3 12:46:25.576859 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 3 12:46:25.576859 ntpd[2197]: 3 Mar 12:46:25 ntpd[2197]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 3 12:46:25.570138 ntpd[2197]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 3 12:46:25.570189 ntpd[2197]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 3 12:46:25.579463 amazon-ssm-agent[2169]: New Seelog Logger Creation Complete
Mar 3 12:46:25.579463 amazon-ssm-agent[2169]: 2026/03/03 12:46:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:25.579463 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:25.579463 amazon-ssm-agent[2169]: 2026/03/03 12:46:25 processing appconfig overrides
Mar 3 12:46:25.584900 containerd[2012]: time="2026-03-03T12:46:25.584369867Z" level=info msg="Start subscribing containerd event"
Mar 3 12:46:25.584900 containerd[2012]: time="2026-03-03T12:46:25.584521799Z" level=info msg="Start recovering state"
Mar 3 12:46:25.584900 containerd[2012]: time="2026-03-03T12:46:25.584673359Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 3 12:46:25.584900 containerd[2012]: time="2026-03-03T12:46:25.584705315Z" level=info msg="Start event monitor"
Mar 3 12:46:25.584900 containerd[2012]: time="2026-03-03T12:46:25.584731211Z" level=info msg="Start cni network conf syncer for default"
Mar 3 12:46:25.584900 containerd[2012]: time="2026-03-03T12:46:25.584748347Z" level=info msg="Start streaming server"
Mar 3 12:46:25.584900 containerd[2012]: time="2026-03-03T12:46:25.584803499Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 3 12:46:25.584900 containerd[2012]: time="2026-03-03T12:46:25.584825855Z" level=info msg="runtime interface starting up..."
Mar 3 12:46:25.587984 amazon-ssm-agent[2169]: 2026/03/03 12:46:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:25.587984 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:25.588163 containerd[2012]: time="2026-03-03T12:46:25.584770847Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 3 12:46:25.590116 amazon-ssm-agent[2169]: 2026/03/03 12:46:25 processing appconfig overrides
Mar 3 12:46:25.590745 amazon-ssm-agent[2169]: 2026/03/03 12:46:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:25.590745 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:25.590745 amazon-ssm-agent[2169]: 2026/03/03 12:46:25 processing appconfig overrides
Mar 3 12:46:25.591754 containerd[2012]: time="2026-03-03T12:46:25.591683507Z" level=info msg="starting plugins..."
Mar 3 12:46:25.592192 containerd[2012]: time="2026-03-03T12:46:25.592054187Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 3 12:46:25.593401 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.5876 INFO Proxy environment variables:
Mar 3 12:46:25.593142 systemd[1]: Started containerd.service - containerd container runtime.
Mar 3 12:46:25.606241 amazon-ssm-agent[2169]: 2026/03/03 12:46:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:25.606241 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:25.606241 amazon-ssm-agent[2169]: 2026/03/03 12:46:25 processing appconfig overrides
Mar 3 12:46:25.606450 containerd[2012]: time="2026-03-03T12:46:25.604169279Z" level=info msg="containerd successfully booted in 0.835893s"
Mar 3 12:46:25.696087 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.5877 INFO http_proxy:
Mar 3 12:46:25.796811 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.5877 INFO no_proxy:
Mar 3 12:46:25.913076 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.5877 INFO https_proxy:
Mar 3 12:46:25.926326 polkitd[2203]: Started polkitd version 126
Mar 3 12:46:25.966686 polkitd[2203]: Loading rules from directory /etc/polkit-1/rules.d
Mar 3 12:46:25.973118 polkitd[2203]: Loading rules from directory /run/polkit-1/rules.d
Mar 3 12:46:25.973350 polkitd[2203]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 3 12:46:25.974000 polkitd[2203]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Mar 3 12:46:25.974064 polkitd[2203]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 3 12:46:25.974146 polkitd[2203]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 3 12:46:25.980062 polkitd[2203]: Finished loading, compiling and executing 2 rules
Mar 3 12:46:25.980874 systemd[1]: Started polkit.service - Authorization Manager.
Mar 3 12:46:25.986382 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 3 12:46:25.987515 polkitd[2203]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 3 12:46:26.013204 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.5901 INFO Checking if agent identity type OnPrem can be assumed
Mar 3 12:46:26.048699 systemd-hostnamed[2050]: Hostname set to (transient)
Mar 3 12:46:26.049048 systemd-resolved[1860]: System hostname changed to 'ip-172-31-20-143'.
Mar 3 12:46:26.111997 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.5902 INFO Checking if agent identity type EC2 can be assumed
Mar 3 12:46:26.212228 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.8280 INFO Agent will take identity from EC2
Mar 3 12:46:26.310400 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.8298 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Mar 3 12:46:26.412159 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.8298 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Mar 3 12:46:26.421949 tar[1992]: linux-arm64/README.md
Mar 3 12:46:26.464321 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 3 12:46:26.509537 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.8298 INFO [amazon-ssm-agent] Starting Core Agent
Mar 3 12:46:26.609875 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.8298 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Mar 3 12:46:26.710228 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.8298 INFO [Registrar] Starting registrar module
Mar 3 12:46:26.810380 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.8327 INFO [EC2Identity] Checking disk for registration info
Mar 3 12:46:26.912284 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.8328 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Mar 3 12:46:26.984004 amazon-ssm-agent[2169]: 2026/03/03 12:46:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:26.984181 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:26.984481 amazon-ssm-agent[2169]: 2026/03/03 12:46:26 processing appconfig overrides
Mar 3 12:46:27.012401 amazon-ssm-agent[2169]: 2026-03-03 12:46:25.8328 INFO [EC2Identity] Generating registration keypair
Mar 3 12:46:27.014549 amazon-ssm-agent[2169]: 2026-03-03 12:46:26.9383 INFO [EC2Identity] Checking write access before registering
Mar 3 12:46:27.014549 amazon-ssm-agent[2169]: 2026-03-03 12:46:26.9390 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Mar 3 12:46:27.014549 amazon-ssm-agent[2169]: 2026-03-03 12:46:26.9837 INFO [EC2Identity] EC2 registration was successful.
Mar 3 12:46:27.014549 amazon-ssm-agent[2169]: 2026-03-03 12:46:26.9837 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Mar 3 12:46:27.014549 amazon-ssm-agent[2169]: 2026-03-03 12:46:26.9839 INFO [CredentialRefresher] credentialRefresher has started
Mar 3 12:46:27.014549 amazon-ssm-agent[2169]: 2026-03-03 12:46:26.9839 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 3 12:46:27.014549 amazon-ssm-agent[2169]: 2026-03-03 12:46:27.0139 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 3 12:46:27.014549 amazon-ssm-agent[2169]: 2026-03-03 12:46:27.0142 INFO [CredentialRefresher] Credentials ready
Mar 3 12:46:27.113275 amazon-ssm-agent[2169]: 2026-03-03 12:46:27.0144 INFO [CredentialRefresher] Next credential rotation will be in 29.9999920145 minutes
Mar 3 12:46:28.047569 sshd_keygen[2010]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 3 12:46:28.049581 amazon-ssm-agent[2169]: 2026-03-03 12:46:28.0494 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 3 12:46:28.121285 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 3 12:46:28.129631 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 3 12:46:28.136783 systemd[1]: Started sshd@0-172.31.20.143:22-20.161.92.111:56778.service - OpenSSH per-connection server daemon (20.161.92.111:56778).
Mar 3 12:46:28.151819 amazon-ssm-agent[2169]: 2026-03-03 12:46:28.0551 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2229) started
Mar 3 12:46:28.179557 systemd[1]: issuegen.service: Deactivated successfully.
Mar 3 12:46:28.181652 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 3 12:46:28.191897 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 3 12:46:28.239309 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 3 12:46:28.247008 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 3 12:46:28.252700 amazon-ssm-agent[2169]: 2026-03-03 12:46:28.0552 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 3 12:46:28.252874 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 3 12:46:28.256264 systemd[1]: Reached target getty.target - Login Prompts.
Mar 3 12:46:28.712965 sshd[2241]: Accepted publickey for core from 20.161.92.111 port 56778 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:28.716561 sshd-session[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:28.730765 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 3 12:46:28.735560 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 3 12:46:28.757330 systemd-logind[1982]: New session 1 of user core.
Mar 3 12:46:28.777832 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 3 12:46:28.788382 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 3 12:46:28.810696 (systemd)[2260]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 3 12:46:28.817529 systemd-logind[1982]: New session c1 of user core.
Mar 3 12:46:28.906449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 12:46:28.912581 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 3 12:46:28.932727 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 12:46:29.116030 systemd[2260]: Queued start job for default target default.target.
Mar 3 12:46:29.130146 systemd[2260]: Created slice app.slice - User Application Slice.
Mar 3 12:46:29.130237 systemd[2260]: Reached target paths.target - Paths.
Mar 3 12:46:29.130354 systemd[2260]: Reached target timers.target - Timers.
Mar 3 12:46:29.135403 systemd[2260]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 3 12:46:29.160002 systemd[2260]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 3 12:46:29.160285 systemd[2260]: Reached target sockets.target - Sockets.
Mar 3 12:46:29.160382 systemd[2260]: Reached target basic.target - Basic System.
Mar 3 12:46:29.160464 systemd[2260]: Reached target default.target - Main User Target.
Mar 3 12:46:29.160523 systemd[2260]: Startup finished in 330ms.
Mar 3 12:46:29.161099 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 3 12:46:29.174520 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 3 12:46:29.180425 systemd[1]: Startup finished in 3.806s (kernel) + 8.764s (initrd) + 10.794s (userspace) = 23.366s.
Mar 3 12:46:29.450892 systemd[1]: Started sshd@1-172.31.20.143:22-20.161.92.111:56792.service - OpenSSH per-connection server daemon (20.161.92.111:56792).
Mar 3 12:46:29.968500 sshd[2285]: Accepted publickey for core from 20.161.92.111 port 56792 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:29.971488 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:29.981925 systemd-logind[1982]: New session 2 of user core.
Mar 3 12:46:29.988473 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 3 12:46:30.152578 kubelet[2271]: E0303 12:46:30.152521 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 12:46:30.156923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 12:46:30.157272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 12:46:30.159366 systemd[1]: kubelet.service: Consumed 1.351s CPU time, 249.6M memory peak.
Mar 3 12:46:30.212705 sshd[2289]: Connection closed by 20.161.92.111 port 56792
Mar 3 12:46:30.217479 sshd-session[2285]: pam_unix(sshd:session): session closed for user core
Mar 3 12:46:30.240699 systemd-logind[1982]: Session 2 logged out. Waiting for processes to exit.
Mar 3 12:46:30.242623 systemd[1]: sshd@1-172.31.20.143:22-20.161.92.111:56792.service: Deactivated successfully.
Mar 3 12:46:30.246470 systemd[1]: session-2.scope: Deactivated successfully.
Mar 3 12:46:30.251312 systemd-logind[1982]: Removed session 2.
Mar 3 12:46:30.304136 systemd[1]: Started sshd@2-172.31.20.143:22-20.161.92.111:33884.service - OpenSSH per-connection server daemon (20.161.92.111:33884).
Mar 3 12:46:30.762063 sshd[2296]: Accepted publickey for core from 20.161.92.111 port 33884 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:30.763631 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:30.771302 systemd-logind[1982]: New session 3 of user core.
Mar 3 12:46:30.778476 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 3 12:46:30.995592 sshd[2299]: Connection closed by 20.161.92.111 port 33884
Mar 3 12:46:30.995473 sshd-session[2296]: pam_unix(sshd:session): session closed for user core
Mar 3 12:46:31.003524 systemd[1]: sshd@2-172.31.20.143:22-20.161.92.111:33884.service: Deactivated successfully.
Mar 3 12:46:31.006591 systemd[1]: session-3.scope: Deactivated successfully.
Mar 3 12:46:31.010548 systemd-logind[1982]: Session 3 logged out. Waiting for processes to exit.
Mar 3 12:46:31.013239 systemd-logind[1982]: Removed session 3.
Mar 3 12:46:31.086440 systemd[1]: Started sshd@3-172.31.20.143:22-20.161.92.111:33900.service - OpenSSH per-connection server daemon (20.161.92.111:33900).
Mar 3 12:46:31.540362 sshd[2305]: Accepted publickey for core from 20.161.92.111 port 33900 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:31.542731 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:31.553621 systemd-logind[1982]: New session 4 of user core.
Mar 3 12:46:31.562441 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 3 12:46:31.781163 sshd[2308]: Connection closed by 20.161.92.111 port 33900
Mar 3 12:46:31.782037 sshd-session[2305]: pam_unix(sshd:session): session closed for user core
Mar 3 12:46:31.789735 systemd-logind[1982]: Session 4 logged out. Waiting for processes to exit.
Mar 3 12:46:31.790858 systemd[1]: sshd@3-172.31.20.143:22-20.161.92.111:33900.service: Deactivated successfully.
Mar 3 12:46:31.796863 systemd[1]: session-4.scope: Deactivated successfully.
Mar 3 12:46:31.800950 systemd-logind[1982]: Removed session 4.
Mar 3 12:46:31.876518 systemd[1]: Started sshd@4-172.31.20.143:22-20.161.92.111:33908.service - OpenSSH per-connection server daemon (20.161.92.111:33908).
Mar 3 12:46:32.329927 sshd[2314]: Accepted publickey for core from 20.161.92.111 port 33908 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:32.332266 sshd-session[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:32.342304 systemd-logind[1982]: New session 5 of user core.
Mar 3 12:46:32.345494 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 3 12:46:32.507696 sudo[2318]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 3 12:46:32.508307 sudo[2318]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 12:46:32.677796 systemd-resolved[1860]: Clock change detected. Flushing caches.
Mar 3 12:46:32.687892 sudo[2318]: pam_unix(sudo:session): session closed for user root
Mar 3 12:46:32.766562 sshd[2317]: Connection closed by 20.161.92.111 port 33908
Mar 3 12:46:32.767402 sshd-session[2314]: pam_unix(sshd:session): session closed for user core
Mar 3 12:46:32.775884 systemd[1]: sshd@4-172.31.20.143:22-20.161.92.111:33908.service: Deactivated successfully.
Mar 3 12:46:32.780890 systemd[1]: session-5.scope: Deactivated successfully.
Mar 3 12:46:32.783053 systemd-logind[1982]: Session 5 logged out. Waiting for processes to exit.
Mar 3 12:46:32.786852 systemd-logind[1982]: Removed session 5.
Mar 3 12:46:32.860208 systemd[1]: Started sshd@5-172.31.20.143:22-20.161.92.111:33916.service - OpenSSH per-connection server daemon (20.161.92.111:33916).
Mar 3 12:46:33.330431 sshd[2324]: Accepted publickey for core from 20.161.92.111 port 33916 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:33.332899 sshd-session[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:33.340200 systemd-logind[1982]: New session 6 of user core.
Mar 3 12:46:33.352996 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 3 12:46:33.495544 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 3 12:46:33.496350 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 12:46:33.504381 sudo[2329]: pam_unix(sudo:session): session closed for user root
Mar 3 12:46:33.513954 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 3 12:46:33.514538 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 12:46:33.531130 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 12:46:33.591437 augenrules[2351]: No rules
Mar 3 12:46:33.593466 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 12:46:33.593924 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 12:46:33.597574 sudo[2328]: pam_unix(sudo:session): session closed for user root
Mar 3 12:46:33.677154 sshd[2327]: Connection closed by 20.161.92.111 port 33916
Mar 3 12:46:33.677122 sshd-session[2324]: pam_unix(sshd:session): session closed for user core
Mar 3 12:46:33.683920 systemd-logind[1982]: Session 6 logged out. Waiting for processes to exit.
Mar 3 12:46:33.684653 systemd[1]: sshd@5-172.31.20.143:22-20.161.92.111:33916.service: Deactivated successfully.
Mar 3 12:46:33.688495 systemd[1]: session-6.scope: Deactivated successfully.
Mar 3 12:46:33.694540 systemd-logind[1982]: Removed session 6.
Mar 3 12:46:33.781187 systemd[1]: Started sshd@6-172.31.20.143:22-20.161.92.111:33928.service - OpenSSH per-connection server daemon (20.161.92.111:33928).
Mar 3 12:46:34.236607 sshd[2360]: Accepted publickey for core from 20.161.92.111 port 33928 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:34.239126 sshd-session[2360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:34.246854 systemd-logind[1982]: New session 7 of user core.
Mar 3 12:46:34.254041 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 3 12:46:34.401013 sudo[2364]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 3 12:46:34.401614 sudo[2364]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 12:46:34.933986 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 3 12:46:34.960259 (dockerd)[2381]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 3 12:46:35.342406 dockerd[2381]: time="2026-03-03T12:46:35.340398721Z" level=info msg="Starting up"
Mar 3 12:46:35.344550 dockerd[2381]: time="2026-03-03T12:46:35.344495749Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 3 12:46:35.363784 dockerd[2381]: time="2026-03-03T12:46:35.363705097Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 3 12:46:35.459155 dockerd[2381]: time="2026-03-03T12:46:35.459079945Z" level=info msg="Loading containers: start."
Mar 3 12:46:35.474821 kernel: Initializing XFRM netlink socket
Mar 3 12:46:35.812173 (udev-worker)[2402]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:46:35.887875 systemd-networkd[1859]: docker0: Link UP
Mar 3 12:46:35.901126 dockerd[2381]: time="2026-03-03T12:46:35.901058176Z" level=info msg="Loading containers: done."
Mar 3 12:46:35.946462 dockerd[2381]: time="2026-03-03T12:46:35.946345612Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 3 12:46:35.946680 dockerd[2381]: time="2026-03-03T12:46:35.946520572Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 3 12:46:35.946680 dockerd[2381]: time="2026-03-03T12:46:35.946665268Z" level=info msg="Initializing buildkit"
Mar 3 12:46:35.997532 dockerd[2381]: time="2026-03-03T12:46:35.997477552Z" level=info msg="Completed buildkit initialization"
Mar 3 12:46:36.010874 dockerd[2381]: time="2026-03-03T12:46:36.010788204Z" level=info msg="Daemon has completed initialization"
Mar 3 12:46:36.011257 dockerd[2381]: time="2026-03-03T12:46:36.011066448Z" level=info msg="API listen on /run/docker.sock"
Mar 3 12:46:36.014964 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 3 12:46:36.393284 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1967909167-merged.mount: Deactivated successfully.
Mar 3 12:46:37.503170 containerd[2012]: time="2026-03-03T12:46:37.503096655Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 3 12:46:38.200857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838251367.mount: Deactivated successfully.
Mar 3 12:46:39.562410 containerd[2012]: time="2026-03-03T12:46:39.562352814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:39.564966 containerd[2012]: time="2026-03-03T12:46:39.564913470Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583252"
Mar 3 12:46:39.567443 containerd[2012]: time="2026-03-03T12:46:39.567372918Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:39.574857 containerd[2012]: time="2026-03-03T12:46:39.574097742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:39.575943 containerd[2012]: time="2026-03-03T12:46:39.575884386Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 2.072726651s"
Mar 3 12:46:39.576051 containerd[2012]: time="2026-03-03T12:46:39.575946822Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\""
Mar 3 12:46:39.576944 containerd[2012]: time="2026-03-03T12:46:39.576890418Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 3 12:46:40.566373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 3 12:46:40.571583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 12:46:40.974719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 12:46:40.991289 (kubelet)[2661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 12:46:41.069400 containerd[2012]: time="2026-03-03T12:46:41.068418929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:41.072147 containerd[2012]: time="2026-03-03T12:46:41.072086141Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139641"
Mar 3 12:46:41.075060 containerd[2012]: time="2026-03-03T12:46:41.074995193Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:41.078135 kubelet[2661]: E0303 12:46:41.077885 2661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 12:46:41.083809 containerd[2012]: time="2026-03-03T12:46:41.083414585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:41.085151 containerd[2012]: time="2026-03-03T12:46:41.085079753Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 1.508126683s"
Mar 3 12:46:41.085399 containerd[2012]: time="2026-03-03T12:46:41.085371485Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\""
Mar 3 12:46:41.087455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 12:46:41.088188 containerd[2012]: time="2026-03-03T12:46:41.088138049Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 3 12:46:41.088307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 12:46:41.089534 systemd[1]: kubelet.service: Consumed 345ms CPU time, 105.6M memory peak.
Mar 3 12:46:42.122923 containerd[2012]: time="2026-03-03T12:46:42.122870910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:42.126168 containerd[2012]: time="2026-03-03T12:46:42.126115326Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195544"
Mar 3 12:46:42.127050 containerd[2012]: time="2026-03-03T12:46:42.126984666Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:42.133257 containerd[2012]: time="2026-03-03T12:46:42.132810186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:42.134471 containerd[2012]: time="2026-03-03T12:46:42.134413662Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"15779966\" in 1.046215481s"
Mar 3 12:46:42.134549 containerd[2012]: time="2026-03-03T12:46:42.134469282Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\""
Mar 3 12:46:42.135245 containerd[2012]: time="2026-03-03T12:46:42.135192474Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 3 12:46:43.325849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4123006407.mount: Deactivated successfully.
Mar 3 12:46:43.715918 containerd[2012]: time="2026-03-03T12:46:43.715195906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:43.717638 containerd[2012]: time="2026-03-03T12:46:43.717566122Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697088"
Mar 3 12:46:43.720080 containerd[2012]: time="2026-03-03T12:46:43.720006538Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:43.725750 containerd[2012]: time="2026-03-03T12:46:43.725316526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:43.726515 containerd[2012]: time="2026-03-03T12:46:43.726459646Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id 
\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 1.591210736s" Mar 3 12:46:43.726630 containerd[2012]: time="2026-03-03T12:46:43.726521998Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\"" Mar 3 12:46:43.727620 containerd[2012]: time="2026-03-03T12:46:43.727552462Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 3 12:46:44.271964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588625219.mount: Deactivated successfully. Mar 3 12:46:45.476787 containerd[2012]: time="2026-03-03T12:46:45.476098451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:45.479133 containerd[2012]: time="2026-03-03T12:46:45.479089199Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Mar 3 12:46:45.481119 containerd[2012]: time="2026-03-03T12:46:45.481048955Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:45.489648 containerd[2012]: time="2026-03-03T12:46:45.488698523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:45.490798 containerd[2012]: time="2026-03-03T12:46:45.490726919Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.763102781s" Mar 3 12:46:45.490897 containerd[2012]: time="2026-03-03T12:46:45.490800995Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Mar 3 12:46:45.491372 containerd[2012]: time="2026-03-03T12:46:45.491326259Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 3 12:46:45.971135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550155851.mount: Deactivated successfully. Mar 3 12:46:45.984793 containerd[2012]: time="2026-03-03T12:46:45.984651134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:45.987317 containerd[2012]: time="2026-03-03T12:46:45.987269654Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Mar 3 12:46:45.989388 containerd[2012]: time="2026-03-03T12:46:45.989327582Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:45.995256 containerd[2012]: time="2026-03-03T12:46:45.995185430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:45.998219 containerd[2012]: time="2026-03-03T12:46:45.998050082Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest 
\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 505.324359ms" Mar 3 12:46:45.998219 containerd[2012]: time="2026-03-03T12:46:45.998099138Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Mar 3 12:46:45.998748 containerd[2012]: time="2026-03-03T12:46:45.998669066Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 3 12:46:46.538250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800359427.mount: Deactivated successfully. Mar 3 12:46:47.805805 containerd[2012]: time="2026-03-03T12:46:47.804874107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:47.807065 containerd[2012]: time="2026-03-03T12:46:47.806754603Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125515" Mar 3 12:46:47.809614 containerd[2012]: time="2026-03-03T12:46:47.809555703Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:47.818071 containerd[2012]: time="2026-03-03T12:46:47.817992975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:47.819517 containerd[2012]: time="2026-03-03T12:46:47.819139947Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.820411733s" Mar 3 12:46:47.819517 
containerd[2012]: time="2026-03-03T12:46:47.819196131Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"" Mar 3 12:46:51.337845 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 3 12:46:51.344904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:46:51.691007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 12:46:51.705651 (kubelet)[2827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 3 12:46:51.776975 kubelet[2827]: E0303 12:46:51.776916 2827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 3 12:46:51.781749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 3 12:46:51.782248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 3 12:46:51.783362 systemd[1]: kubelet.service: Consumed 289ms CPU time, 106.6M memory peak. Mar 3 12:46:53.469488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 12:46:53.469859 systemd[1]: kubelet.service: Consumed 289ms CPU time, 106.6M memory peak. Mar 3 12:46:53.476097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:46:53.530637 systemd[1]: Reload requested from client PID 2841 ('systemctl') (unit session-7.scope)... Mar 3 12:46:53.530669 systemd[1]: Reloading... Mar 3 12:46:53.776849 zram_generator::config[2889]: No configuration found. Mar 3 12:46:54.241321 systemd[1]: Reloading finished in 709 ms. 
Mar 3 12:46:54.352827 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 3 12:46:54.353214 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 3 12:46:54.354947 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 12:46:54.355035 systemd[1]: kubelet.service: Consumed 234ms CPU time, 94.9M memory peak. Mar 3 12:46:54.360400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:46:54.708725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 12:46:54.729622 (kubelet)[2950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 3 12:46:54.802925 kubelet[2950]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 3 12:46:54.802925 kubelet[2950]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 3 12:46:54.803413 kubelet[2950]: I0303 12:46:54.802994 2950 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 3 12:46:55.576300 kubelet[2950]: I0303 12:46:55.576174 2950 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 3 12:46:55.576300 kubelet[2950]: I0303 12:46:55.576237 2950 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 12:46:55.576300 kubelet[2950]: I0303 12:46:55.576283 2950 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 3 12:46:55.576300 kubelet[2950]: I0303 12:46:55.576297 2950 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 3 12:46:55.577139 kubelet[2950]: I0303 12:46:55.577022 2950 server.go:956] "Client rotation is on, will bootstrap in background" Mar 3 12:46:55.593390 kubelet[2950]: E0303 12:46:55.593340 2950 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.143:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 3 12:46:55.595520 kubelet[2950]: I0303 12:46:55.595265 2950 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 12:46:55.602105 kubelet[2950]: I0303 12:46:55.602057 2950 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 12:46:55.607486 kubelet[2950]: I0303 12:46:55.607435 2950 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 3 12:46:55.607996 kubelet[2950]: I0303 12:46:55.607933 2950 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 12:46:55.608266 kubelet[2950]: I0303 12:46:55.607995 2950 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 12:46:55.608448 kubelet[2950]: I0303 12:46:55.608269 2950 topology_manager.go:138] "Creating topology manager with none policy" Mar 3 
12:46:55.608448 kubelet[2950]: I0303 12:46:55.608291 2950 container_manager_linux.go:306] "Creating device plugin manager" Mar 3 12:46:55.608542 kubelet[2950]: I0303 12:46:55.608457 2950 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 3 12:46:55.613272 kubelet[2950]: I0303 12:46:55.613218 2950 state_mem.go:36] "Initialized new in-memory state store" Mar 3 12:46:55.615613 kubelet[2950]: I0303 12:46:55.615563 2950 kubelet.go:475] "Attempting to sync node with API server" Mar 3 12:46:55.615613 kubelet[2950]: I0303 12:46:55.615603 2950 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 12:46:55.616555 kubelet[2950]: E0303 12:46:55.616516 2950 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-143&limit=500&resourceVersion=0\": dial tcp 172.31.20.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 3 12:46:55.616728 kubelet[2950]: I0303 12:46:55.616636 2950 kubelet.go:387] "Adding apiserver pod source" Mar 3 12:46:55.616888 kubelet[2950]: I0303 12:46:55.616869 2950 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 12:46:55.619300 kubelet[2950]: E0303 12:46:55.619131 2950 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 3 12:46:55.619974 kubelet[2950]: I0303 12:46:55.619943 2950 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 12:46:55.621048 kubelet[2950]: I0303 12:46:55.621019 2950 kubelet.go:940] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 12:46:55.621249 kubelet[2950]: I0303 12:46:55.621227 2950 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 3 12:46:55.621406 kubelet[2950]: W0303 12:46:55.621387 2950 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 3 12:46:55.626688 kubelet[2950]: I0303 12:46:55.625754 2950 server.go:1262] "Started kubelet" Mar 3 12:46:55.630499 kubelet[2950]: I0303 12:46:55.630437 2950 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 12:46:55.632824 kubelet[2950]: I0303 12:46:55.631963 2950 server.go:310] "Adding debug handlers to kubelet server" Mar 3 12:46:55.632824 kubelet[2950]: I0303 12:46:55.632325 2950 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 3 12:46:55.632824 kubelet[2950]: I0303 12:46:55.632421 2950 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 3 12:46:55.633779 kubelet[2950]: I0303 12:46:55.633725 2950 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 3 12:46:55.640906 kubelet[2950]: I0303 12:46:55.640856 2950 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 3 12:46:55.642843 kubelet[2950]: E0303 12:46:55.640237 2950 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.143:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.143:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-143.1899559266d34f11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-143,UID:ip-172-31-20-143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-143,},FirstTimestamp:2026-03-03 12:46:55.625711377 +0000 UTC m=+0.890069477,LastTimestamp:2026-03-03 12:46:55.625711377 +0000 UTC m=+0.890069477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-143,}" Mar 3 12:46:55.646371 kubelet[2950]: I0303 12:46:55.645479 2950 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 3 12:46:55.655354 kubelet[2950]: E0303 12:46:55.655318 2950 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-20-143\" not found" Mar 3 12:46:55.655563 kubelet[2950]: I0303 12:46:55.655544 2950 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 3 12:46:55.656600 kubelet[2950]: I0303 12:46:55.656544 2950 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 3 12:46:55.657134 kubelet[2950]: I0303 12:46:55.657098 2950 reconciler.go:29] "Reconciler: start to sync state" Mar 3 12:46:55.659911 kubelet[2950]: E0303 12:46:55.659856 2950 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-143?timeout=10s\": dial tcp 172.31.20.143:6443: connect: connection refused" interval="200ms" Mar 3 12:46:55.663732 kubelet[2950]: E0303 12:46:55.661079 2950 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.143:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 3 12:46:55.666113 kubelet[2950]: I0303 12:46:55.666078 2950 factory.go:223] Registration of the containerd container factory successfully Mar 3 12:46:55.666269 kubelet[2950]: I0303 12:46:55.666251 2950 factory.go:223] Registration of the systemd container factory successfully Mar 3 12:46:55.666485 kubelet[2950]: I0303 12:46:55.666454 2950 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 3 12:46:55.687316 kubelet[2950]: I0303 12:46:55.687248 2950 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 3 12:46:55.689390 kubelet[2950]: I0303 12:46:55.689329 2950 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 3 12:46:55.689390 kubelet[2950]: I0303 12:46:55.689377 2950 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 3 12:46:55.689654 kubelet[2950]: I0303 12:46:55.689425 2950 kubelet.go:2428] "Starting kubelet main sync loop" Mar 3 12:46:55.689654 kubelet[2950]: E0303 12:46:55.689497 2950 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 3 12:46:55.696594 kubelet[2950]: E0303 12:46:55.696006 2950 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 3 12:46:55.699929 kubelet[2950]: E0303 12:46:55.699892 2950 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 3 12:46:55.710917 kubelet[2950]: I0303 12:46:55.710886 2950 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 3 12:46:55.711129 kubelet[2950]: I0303 12:46:55.711105 2950 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 3 12:46:55.711239 kubelet[2950]: I0303 12:46:55.711222 2950 state_mem.go:36] "Initialized new in-memory state store" Mar 3 12:46:55.716272 kubelet[2950]: I0303 12:46:55.716240 2950 policy_none.go:49] "None policy: Start" Mar 3 12:46:55.716438 kubelet[2950]: I0303 12:46:55.716420 2950 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 3 12:46:55.716561 kubelet[2950]: I0303 12:46:55.716542 2950 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 3 12:46:55.720811 kubelet[2950]: I0303 12:46:55.720784 2950 policy_none.go:47] "Start" Mar 3 12:46:55.728533 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 3 12:46:55.746327 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 3 12:46:55.754392 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 3 12:46:55.755913 kubelet[2950]: E0303 12:46:55.755859 2950 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-20-143\" not found" Mar 3 12:46:55.766348 kubelet[2950]: E0303 12:46:55.766295 2950 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 3 12:46:55.766677 kubelet[2950]: I0303 12:46:55.766621 2950 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 3 12:46:55.766745 kubelet[2950]: I0303 12:46:55.766654 2950 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 3 12:46:55.768214 kubelet[2950]: I0303 12:46:55.768159 2950 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 3 12:46:55.773753 kubelet[2950]: E0303 12:46:55.773717 2950 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 3 12:46:55.774049 kubelet[2950]: E0303 12:46:55.774012 2950 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-143\" not found" Mar 3 12:46:55.811501 systemd[1]: Created slice kubepods-burstable-pod49df402d850e6ca3583e80ac98d669a3.slice - libcontainer container kubepods-burstable-pod49df402d850e6ca3583e80ac98d669a3.slice. Mar 3 12:46:55.832312 kubelet[2950]: E0303 12:46:55.832100 2950 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-143\" not found" node="ip-172-31-20-143" Mar 3 12:46:55.841367 systemd[1]: Created slice kubepods-burstable-pod1f6ac2079bc4d7b6e70f51f60e70a79c.slice - libcontainer container kubepods-burstable-pod1f6ac2079bc4d7b6e70f51f60e70a79c.slice. 
Mar 3 12:46:55.856529 kubelet[2950]: E0303 12:46:55.856310 2950 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-143\" not found" node="ip-172-31-20-143" Mar 3 12:46:55.858688 kubelet[2950]: I0303 12:46:55.858541 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49df402d850e6ca3583e80ac98d669a3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-143\" (UID: \"49df402d850e6ca3583e80ac98d669a3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:46:55.859228 kubelet[2950]: I0303 12:46:55.858883 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd8679201c4fcedf0ec8d21af9992e8d-ca-certs\") pod \"kube-apiserver-ip-172-31-20-143\" (UID: \"dd8679201c4fcedf0ec8d21af9992e8d\") " pod="kube-system/kube-apiserver-ip-172-31-20-143" Mar 3 12:46:55.859345 kubelet[2950]: I0303 12:46:55.859321 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd8679201c4fcedf0ec8d21af9992e8d-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-143\" (UID: \"dd8679201c4fcedf0ec8d21af9992e8d\") " pod="kube-system/kube-apiserver-ip-172-31-20-143" Mar 3 12:46:55.859512 kubelet[2950]: I0303 12:46:55.859440 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49df402d850e6ca3583e80ac98d669a3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-143\" (UID: \"49df402d850e6ca3583e80ac98d669a3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:46:55.860132 kubelet[2950]: I0303 12:46:55.860074 2950 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f6ac2079bc4d7b6e70f51f60e70a79c-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-143\" (UID: \"1f6ac2079bc4d7b6e70f51f60e70a79c\") " pod="kube-system/kube-scheduler-ip-172-31-20-143" Mar 3 12:46:55.860266 kubelet[2950]: I0303 12:46:55.860239 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd8679201c4fcedf0ec8d21af9992e8d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-143\" (UID: \"dd8679201c4fcedf0ec8d21af9992e8d\") " pod="kube-system/kube-apiserver-ip-172-31-20-143" Mar 3 12:46:55.860464 kubelet[2950]: I0303 12:46:55.860337 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49df402d850e6ca3583e80ac98d669a3-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-143\" (UID: \"49df402d850e6ca3583e80ac98d669a3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:46:55.860624 kubelet[2950]: I0303 12:46:55.860541 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49df402d850e6ca3583e80ac98d669a3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-143\" (UID: \"49df402d850e6ca3583e80ac98d669a3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:46:55.860624 kubelet[2950]: I0303 12:46:55.860586 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49df402d850e6ca3583e80ac98d669a3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-143\" (UID: \"49df402d850e6ca3583e80ac98d669a3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:46:55.863539 
kubelet[2950]: E0303 12:46:55.863418 2950 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-143?timeout=10s\": dial tcp 172.31.20.143:6443: connect: connection refused" interval="400ms" Mar 3 12:46:55.863844 systemd[1]: Created slice kubepods-burstable-poddd8679201c4fcedf0ec8d21af9992e8d.slice - libcontainer container kubepods-burstable-poddd8679201c4fcedf0ec8d21af9992e8d.slice. Mar 3 12:46:55.869835 kubelet[2950]: E0303 12:46:55.869419 2950 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-143\" not found" node="ip-172-31-20-143" Mar 3 12:46:55.870233 kubelet[2950]: I0303 12:46:55.870204 2950 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-143" Mar 3 12:46:55.871505 kubelet[2950]: E0303 12:46:55.871434 2950 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.143:6443/api/v1/nodes\": dial tcp 172.31.20.143:6443: connect: connection refused" node="ip-172-31-20-143" Mar 3 12:46:56.075145 kubelet[2950]: I0303 12:46:56.074754 2950 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-143" Mar 3 12:46:56.075294 kubelet[2950]: E0303 12:46:56.075242 2950 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.143:6443/api/v1/nodes\": dial tcp 172.31.20.143:6443: connect: connection refused" node="ip-172-31-20-143" Mar 3 12:46:56.140355 containerd[2012]: time="2026-03-03T12:46:56.139676036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-143,Uid:49df402d850e6ca3583e80ac98d669a3,Namespace:kube-system,Attempt:0,}" Mar 3 12:46:56.162755 containerd[2012]: time="2026-03-03T12:46:56.162656804Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-143,Uid:1f6ac2079bc4d7b6e70f51f60e70a79c,Namespace:kube-system,Attempt:0,}" Mar 3 12:46:56.174794 containerd[2012]: time="2026-03-03T12:46:56.174623408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-143,Uid:dd8679201c4fcedf0ec8d21af9992e8d,Namespace:kube-system,Attempt:0,}" Mar 3 12:46:56.217963 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 3 12:46:56.264474 kubelet[2950]: E0303 12:46:56.264396 2950 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-143?timeout=10s\": dial tcp 172.31.20.143:6443: connect: connection refused" interval="800ms" Mar 3 12:46:56.477646 kubelet[2950]: I0303 12:46:56.477480 2950 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-143" Mar 3 12:46:56.478963 kubelet[2950]: E0303 12:46:56.478849 2950 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.143:6443/api/v1/nodes\": dial tcp 172.31.20.143:6443: connect: connection refused" node="ip-172-31-20-143" Mar 3 12:46:56.658219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362643634.mount: Deactivated successfully. 
Mar 3 12:46:56.676586 containerd[2012]: time="2026-03-03T12:46:56.676521899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 12:46:56.683070 containerd[2012]: time="2026-03-03T12:46:56.682671743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 3 12:46:56.687115 containerd[2012]: time="2026-03-03T12:46:56.687043715Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 12:46:56.690412 containerd[2012]: time="2026-03-03T12:46:56.689808779Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 12:46:56.693564 containerd[2012]: time="2026-03-03T12:46:56.693491207Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 12:46:56.695699 containerd[2012]: time="2026-03-03T12:46:56.695637959Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 12:46:56.697701 containerd[2012]: time="2026-03-03T12:46:56.697645487Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 12:46:56.700130 containerd[2012]: time="2026-03-03T12:46:56.700070579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 12:46:56.701586 
containerd[2012]: time="2026-03-03T12:46:56.701542415Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 557.598231ms" Mar 3 12:46:56.705802 containerd[2012]: time="2026-03-03T12:46:56.705716531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 539.727891ms" Mar 3 12:46:56.717148 containerd[2012]: time="2026-03-03T12:46:56.717047087Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 538.864179ms" Mar 3 12:46:56.740958 containerd[2012]: time="2026-03-03T12:46:56.740727455Z" level=info msg="connecting to shim 0b07d315fc02d7c93c57a51237843b01a04ad53e10962981679222a00269bd2f" address="unix:///run/containerd/s/756a1dbf197157e924139a4fac44bf526f6378e359b6ad7ae85e90ae098b9b24" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:46:56.765245 containerd[2012]: time="2026-03-03T12:46:56.764938739Z" level=info msg="connecting to shim 08f3414bf32678d7dcf2c5e7b97d95d75d2771d346ec8a6afb14a25e5eda5996" address="unix:///run/containerd/s/ae27671b41427803dc9a537394de2348fb26ae19618f9a8a3ddd29971d6c1669" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:46:56.800839 containerd[2012]: time="2026-03-03T12:46:56.799997363Z" level=info msg="connecting to shim 
8f44db8f69e37912ab021ab208a1a3c8adfbbb49207bdc63cd24ddbfb0c52637" address="unix:///run/containerd/s/ca4f79ba49c204e10bcd50f1bbed2fac08c07da28a568bb70e33e9c4a7c32992" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:46:56.835565 systemd[1]: Started cri-containerd-08f3414bf32678d7dcf2c5e7b97d95d75d2771d346ec8a6afb14a25e5eda5996.scope - libcontainer container 08f3414bf32678d7dcf2c5e7b97d95d75d2771d346ec8a6afb14a25e5eda5996. Mar 3 12:46:56.854877 systemd[1]: Started cri-containerd-0b07d315fc02d7c93c57a51237843b01a04ad53e10962981679222a00269bd2f.scope - libcontainer container 0b07d315fc02d7c93c57a51237843b01a04ad53e10962981679222a00269bd2f. Mar 3 12:46:56.868575 systemd[1]: Started cri-containerd-8f44db8f69e37912ab021ab208a1a3c8adfbbb49207bdc63cd24ddbfb0c52637.scope - libcontainer container 8f44db8f69e37912ab021ab208a1a3c8adfbbb49207bdc63cd24ddbfb0c52637. Mar 3 12:46:56.878212 kubelet[2950]: E0303 12:46:56.878141 2950 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-143&limit=500&resourceVersion=0\": dial tcp 172.31.20.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 3 12:46:56.980294 containerd[2012]: time="2026-03-03T12:46:56.980153940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-143,Uid:dd8679201c4fcedf0ec8d21af9992e8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"08f3414bf32678d7dcf2c5e7b97d95d75d2771d346ec8a6afb14a25e5eda5996\"" Mar 3 12:46:57.010140 containerd[2012]: time="2026-03-03T12:46:57.009681224Z" level=info msg="CreateContainer within sandbox \"08f3414bf32678d7dcf2c5e7b97d95d75d2771d346ec8a6afb14a25e5eda5996\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 3 12:46:57.023904 containerd[2012]: time="2026-03-03T12:46:57.023753636Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-143,Uid:49df402d850e6ca3583e80ac98d669a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b07d315fc02d7c93c57a51237843b01a04ad53e10962981679222a00269bd2f\"" Mar 3 12:46:57.035185 containerd[2012]: time="2026-03-03T12:46:57.035112476Z" level=info msg="Container 91b1c6fd92367fff70ba75fed6afca973d4cb5fd70516923a001c281a8aa1ac5: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:46:57.038881 containerd[2012]: time="2026-03-03T12:46:57.038471277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-143,Uid:1f6ac2079bc4d7b6e70f51f60e70a79c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f44db8f69e37912ab021ab208a1a3c8adfbbb49207bdc63cd24ddbfb0c52637\"" Mar 3 12:46:57.039955 containerd[2012]: time="2026-03-03T12:46:57.039475881Z" level=info msg="CreateContainer within sandbox \"0b07d315fc02d7c93c57a51237843b01a04ad53e10962981679222a00269bd2f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 3 12:46:57.055611 containerd[2012]: time="2026-03-03T12:46:57.054920133Z" level=info msg="CreateContainer within sandbox \"8f44db8f69e37912ab021ab208a1a3c8adfbbb49207bdc63cd24ddbfb0c52637\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 3 12:46:57.067007 kubelet[2950]: E0303 12:46:57.066739 2950 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-143?timeout=10s\": dial tcp 172.31.20.143:6443: connect: connection refused" interval="1.6s" Mar 3 12:46:57.068581 containerd[2012]: time="2026-03-03T12:46:57.068530281Z" level=info msg="CreateContainer within sandbox \"08f3414bf32678d7dcf2c5e7b97d95d75d2771d346ec8a6afb14a25e5eda5996\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"91b1c6fd92367fff70ba75fed6afca973d4cb5fd70516923a001c281a8aa1ac5\"" Mar 3 12:46:57.069696 
containerd[2012]: time="2026-03-03T12:46:57.069625809Z" level=info msg="StartContainer for \"91b1c6fd92367fff70ba75fed6afca973d4cb5fd70516923a001c281a8aa1ac5\"" Mar 3 12:46:57.071880 containerd[2012]: time="2026-03-03T12:46:57.071822073Z" level=info msg="connecting to shim 91b1c6fd92367fff70ba75fed6afca973d4cb5fd70516923a001c281a8aa1ac5" address="unix:///run/containerd/s/ae27671b41427803dc9a537394de2348fb26ae19618f9a8a3ddd29971d6c1669" protocol=ttrpc version=3 Mar 3 12:46:57.077367 containerd[2012]: time="2026-03-03T12:46:57.076040865Z" level=info msg="Container 46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:46:57.096052 containerd[2012]: time="2026-03-03T12:46:57.095988993Z" level=info msg="CreateContainer within sandbox \"0b07d315fc02d7c93c57a51237843b01a04ad53e10962981679222a00269bd2f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048\"" Mar 3 12:46:57.097494 containerd[2012]: time="2026-03-03T12:46:57.097443213Z" level=info msg="StartContainer for \"46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048\"" Mar 3 12:46:57.099599 containerd[2012]: time="2026-03-03T12:46:57.099542733Z" level=info msg="connecting to shim 46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048" address="unix:///run/containerd/s/756a1dbf197157e924139a4fac44bf526f6378e359b6ad7ae85e90ae098b9b24" protocol=ttrpc version=3 Mar 3 12:46:57.106352 systemd[1]: Started cri-containerd-91b1c6fd92367fff70ba75fed6afca973d4cb5fd70516923a001c281a8aa1ac5.scope - libcontainer container 91b1c6fd92367fff70ba75fed6afca973d4cb5fd70516923a001c281a8aa1ac5. 
Mar 3 12:46:57.107670 containerd[2012]: time="2026-03-03T12:46:57.107607945Z" level=info msg="Container dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:46:57.135968 containerd[2012]: time="2026-03-03T12:46:57.134695797Z" level=info msg="CreateContainer within sandbox \"8f44db8f69e37912ab021ab208a1a3c8adfbbb49207bdc63cd24ddbfb0c52637\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f\"" Mar 3 12:46:57.136886 containerd[2012]: time="2026-03-03T12:46:57.136842381Z" level=info msg="StartContainer for \"dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f\"" Mar 3 12:46:57.142509 containerd[2012]: time="2026-03-03T12:46:57.142458645Z" level=info msg="connecting to shim dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f" address="unix:///run/containerd/s/ca4f79ba49c204e10bcd50f1bbed2fac08c07da28a568bb70e33e9c4a7c32992" protocol=ttrpc version=3 Mar 3 12:46:57.159077 kubelet[2950]: E0303 12:46:57.158979 2950 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 3 12:46:57.162957 systemd[1]: Started cri-containerd-46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048.scope - libcontainer container 46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048. 
Mar 3 12:46:57.199719 kubelet[2950]: E0303 12:46:57.197382 2950 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 3 12:46:57.203359 systemd[1]: Started cri-containerd-dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f.scope - libcontainer container dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f. Mar 3 12:46:57.209389 kubelet[2950]: E0303 12:46:57.209318 2950 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 3 12:46:57.280889 containerd[2012]: time="2026-03-03T12:46:57.279809014Z" level=info msg="StartContainer for \"91b1c6fd92367fff70ba75fed6afca973d4cb5fd70516923a001c281a8aa1ac5\" returns successfully" Mar 3 12:46:57.283497 kubelet[2950]: I0303 12:46:57.283330 2950 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-143" Mar 3 12:46:57.284053 kubelet[2950]: E0303 12:46:57.283956 2950 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.143:6443/api/v1/nodes\": dial tcp 172.31.20.143:6443: connect: connection refused" node="ip-172-31-20-143" Mar 3 12:46:57.366843 containerd[2012]: time="2026-03-03T12:46:57.366154234Z" level=info msg="StartContainer for \"46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048\" returns successfully" Mar 3 12:46:57.381625 containerd[2012]: time="2026-03-03T12:46:57.381376102Z" level=info msg="StartContainer for 
\"dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f\" returns successfully" Mar 3 12:46:57.726449 kubelet[2950]: E0303 12:46:57.726395 2950 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-143\" not found" node="ip-172-31-20-143" Mar 3 12:46:57.734319 kubelet[2950]: E0303 12:46:57.734258 2950 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-143\" not found" node="ip-172-31-20-143" Mar 3 12:46:57.737707 kubelet[2950]: E0303 12:46:57.737657 2950 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-143\" not found" node="ip-172-31-20-143" Mar 3 12:46:58.742447 kubelet[2950]: E0303 12:46:58.742395 2950 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-143\" not found" node="ip-172-31-20-143" Mar 3 12:46:58.745039 kubelet[2950]: E0303 12:46:58.744992 2950 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-143\" not found" node="ip-172-31-20-143" Mar 3 12:46:58.887733 kubelet[2950]: I0303 12:46:58.887678 2950 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-143" Mar 3 12:46:59.745188 kubelet[2950]: E0303 12:46:59.745077 2950 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-143\" not found" node="ip-172-31-20-143" Mar 3 12:47:00.520257 kubelet[2950]: E0303 12:47:00.520199 2950 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-143\" not found" node="ip-172-31-20-143" Mar 3 12:47:01.829800 kubelet[2950]: E0303 12:47:01.829102 2950 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"ip-172-31-20-143\" not found" node="ip-172-31-20-143" Mar 3 12:47:01.900791 kubelet[2950]: I0303 12:47:01.899628 2950 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-143" Mar 3 12:47:01.900791 kubelet[2950]: E0303 12:47:01.899684 2950 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-20-143\": node \"ip-172-31-20-143\" not found" Mar 3 12:47:01.960101 kubelet[2950]: I0303 12:47:01.960040 2950 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:47:02.013718 kubelet[2950]: E0303 12:47:02.013562 2950 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-143.1899559266d34f11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-143,UID:ip-172-31-20-143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-143,},FirstTimestamp:2026-03-03 12:46:55.625711377 +0000 UTC m=+0.890069477,LastTimestamp:2026-03-03 12:46:55.625711377 +0000 UTC m=+0.890069477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-143,}" Mar 3 12:47:02.055102 kubelet[2950]: E0303 12:47:02.055046 2950 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-143\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:47:02.055102 kubelet[2950]: I0303 12:47:02.055093 2950 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-143" Mar 3 12:47:02.062456 kubelet[2950]: E0303 12:47:02.062386 2950 kubelet.go:3222] "Failed creating a mirror pod" 
err="pods \"kube-scheduler-ip-172-31-20-143\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-20-143" Mar 3 12:47:02.062456 kubelet[2950]: I0303 12:47:02.062435 2950 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-143" Mar 3 12:47:02.066782 kubelet[2950]: E0303 12:47:02.066709 2950 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-143\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-143" Mar 3 12:47:02.626787 kubelet[2950]: I0303 12:47:02.626478 2950 apiserver.go:52] "Watching apiserver" Mar 3 12:47:02.657305 kubelet[2950]: I0303 12:47:02.657236 2950 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 12:47:02.940883 kubelet[2950]: I0303 12:47:02.938405 2950 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-143" Mar 3 12:47:04.213586 systemd[1]: Reload requested from client PID 3237 ('systemctl') (unit session-7.scope)... Mar 3 12:47:04.213612 systemd[1]: Reloading... Mar 3 12:47:04.406822 zram_generator::config[3284]: No configuration found. Mar 3 12:47:04.907133 systemd[1]: Reloading finished in 692 ms. Mar 3 12:47:04.954018 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:47:04.980329 systemd[1]: kubelet.service: Deactivated successfully. Mar 3 12:47:04.980918 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 12:47:04.981011 systemd[1]: kubelet.service: Consumed 1.702s CPU time, 121.4M memory peak. Mar 3 12:47:04.987895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:47:05.382826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 3 12:47:05.398371 (kubelet)[3341]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 3 12:47:05.501271 kubelet[3341]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 3 12:47:05.501271 kubelet[3341]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 3 12:47:05.501752 kubelet[3341]: I0303 12:47:05.501346 3341 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 3 12:47:05.517657 kubelet[3341]: I0303 12:47:05.517593 3341 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 3 12:47:05.517657 kubelet[3341]: I0303 12:47:05.517642 3341 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 12:47:05.517916 kubelet[3341]: I0303 12:47:05.517687 3341 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 3 12:47:05.517916 kubelet[3341]: I0303 12:47:05.517701 3341 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 3 12:47:05.519747 kubelet[3341]: I0303 12:47:05.518141 3341 server.go:956] "Client rotation is on, will bootstrap in background" Mar 3 12:47:05.524784 kubelet[3341]: I0303 12:47:05.524440 3341 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 3 12:47:05.533006 kubelet[3341]: I0303 12:47:05.532340 3341 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 12:47:05.546857 kubelet[3341]: I0303 12:47:05.546524 3341 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 12:47:05.553982 kubelet[3341]: I0303 12:47:05.553899 3341 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 3 12:47:05.554430 kubelet[3341]: I0303 12:47:05.554369 3341 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 12:47:05.555264 kubelet[3341]: I0303 12:47:05.554426 3341 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-20-143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 12:47:05.555264 kubelet[3341]: I0303 12:47:05.554695 3341 topology_manager.go:138] "Creating topology manager with none policy" Mar 3 12:47:05.555264 kubelet[3341]: I0303 12:47:05.554716 3341 container_manager_linux.go:306] "Creating device plugin manager" Mar 3 12:47:05.555264 kubelet[3341]: I0303 12:47:05.554783 3341 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 3 12:47:05.555264 kubelet[3341]: I0303 12:47:05.555171 3341 state_mem.go:36] 
"Initialized new in-memory state store" Mar 3 12:47:05.555657 kubelet[3341]: I0303 12:47:05.555399 3341 kubelet.go:475] "Attempting to sync node with API server" Mar 3 12:47:05.555657 kubelet[3341]: I0303 12:47:05.555430 3341 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 12:47:05.555657 kubelet[3341]: I0303 12:47:05.555472 3341 kubelet.go:387] "Adding apiserver pod source" Mar 3 12:47:05.555657 kubelet[3341]: I0303 12:47:05.555495 3341 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 12:47:05.561568 kubelet[3341]: I0303 12:47:05.560923 3341 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 12:47:05.562982 kubelet[3341]: I0303 12:47:05.562926 3341 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 12:47:05.563086 kubelet[3341]: I0303 12:47:05.562997 3341 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 3 12:47:05.578357 kubelet[3341]: I0303 12:47:05.578172 3341 server.go:1262] "Started kubelet" Mar 3 12:47:05.588268 kubelet[3341]: I0303 12:47:05.588159 3341 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 3 12:47:05.588427 kubelet[3341]: I0303 12:47:05.588306 3341 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 3 12:47:05.589098 kubelet[3341]: I0303 12:47:05.588943 3341 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 12:47:05.591059 kubelet[3341]: I0303 12:47:05.590603 3341 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 3 12:47:05.593503 sudo[3355]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 3 12:47:05.594246 sudo[3355]: 
pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 3 12:47:05.602719 kubelet[3341]: I0303 12:47:05.602489 3341 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 3 12:47:05.604489 kubelet[3341]: I0303 12:47:05.604410 3341 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 3 12:47:05.612817 kubelet[3341]: I0303 12:47:05.611273 3341 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 3 12:47:05.612817 kubelet[3341]: E0303 12:47:05.611447 3341 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-20-143\" not found" Mar 3 12:47:05.615260 kubelet[3341]: I0303 12:47:05.615208 3341 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 3 12:47:05.615984 kubelet[3341]: I0303 12:47:05.615431 3341 reconciler.go:29] "Reconciler: start to sync state" Mar 3 12:47:05.617285 kubelet[3341]: I0303 12:47:05.616388 3341 server.go:310] "Adding debug handlers to kubelet server" Mar 3 12:47:05.697666 kubelet[3341]: I0303 12:47:05.697388 3341 factory.go:223] Registration of the containerd container factory successfully Mar 3 12:47:05.732309 kubelet[3341]: I0303 12:47:05.729286 3341 factory.go:223] Registration of the systemd container factory successfully Mar 3 12:47:05.732309 kubelet[3341]: I0303 12:47:05.729446 3341 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 3 12:47:05.738209 kubelet[3341]: I0303 12:47:05.737395 3341 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 3 12:47:05.748604 kubelet[3341]: E0303 12:47:05.712852 3341 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-20-143\" not found" Mar 3 12:47:05.758754 kubelet[3341]: I0303 12:47:05.758422 3341 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 3 12:47:05.768079 kubelet[3341]: I0303 12:47:05.763676 3341 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 3 12:47:05.768079 kubelet[3341]: I0303 12:47:05.767363 3341 kubelet.go:2428] "Starting kubelet main sync loop" Mar 3 12:47:05.768079 kubelet[3341]: E0303 12:47:05.767449 3341 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 3 12:47:05.768079 kubelet[3341]: E0303 12:47:05.759504 3341 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 3 12:47:05.867613 kubelet[3341]: E0303 12:47:05.867561 3341 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 3 12:47:05.921968 kubelet[3341]: I0303 12:47:05.921419 3341 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 3 12:47:05.921968 kubelet[3341]: I0303 12:47:05.921451 3341 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 3 12:47:05.921968 kubelet[3341]: I0303 12:47:05.921489 3341 state_mem.go:36] "Initialized new in-memory state store" Mar 3 12:47:05.921968 kubelet[3341]: I0303 12:47:05.921753 3341 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 3 12:47:05.921968 kubelet[3341]: I0303 12:47:05.921800 3341 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 3 12:47:05.921968 kubelet[3341]: I0303 12:47:05.921832 3341 policy_none.go:49] "None policy: Start" Mar 3 12:47:05.921968 kubelet[3341]: I0303 12:47:05.921851 3341 memory_manager.go:187] 
"Starting memorymanager" policy="None" Mar 3 12:47:05.921968 kubelet[3341]: I0303 12:47:05.921872 3341 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 3 12:47:05.924217 kubelet[3341]: I0303 12:47:05.924179 3341 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 3 12:47:05.925432 kubelet[3341]: I0303 12:47:05.924379 3341 policy_none.go:47] "Start" Mar 3 12:47:05.949688 kubelet[3341]: E0303 12:47:05.948840 3341 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 3 12:47:05.950298 kubelet[3341]: I0303 12:47:05.950122 3341 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 3 12:47:05.950753 kubelet[3341]: I0303 12:47:05.950450 3341 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 3 12:47:05.952995 kubelet[3341]: I0303 12:47:05.951872 3341 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 3 12:47:05.958531 kubelet[3341]: E0303 12:47:05.958492 3341 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 3 12:47:06.068907 kubelet[3341]: I0303 12:47:06.068674 3341 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-143" Mar 3 12:47:06.069110 kubelet[3341]: I0303 12:47:06.069070 3341 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-143" Mar 3 12:47:06.069311 kubelet[3341]: I0303 12:47:06.068833 3341 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:47:06.078153 kubelet[3341]: I0303 12:47:06.078075 3341 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-143" Mar 3 12:47:06.096466 kubelet[3341]: E0303 12:47:06.096358 3341 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-143\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-143" Mar 3 12:47:06.103374 kubelet[3341]: I0303 12:47:06.103207 3341 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-143" Mar 3 12:47:06.103374 kubelet[3341]: I0303 12:47:06.103317 3341 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-143" Mar 3 12:47:06.124210 kubelet[3341]: I0303 12:47:06.124150 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49df402d850e6ca3583e80ac98d669a3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-143\" (UID: \"49df402d850e6ca3583e80ac98d669a3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:47:06.124210 kubelet[3341]: I0303 12:47:06.124221 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f6ac2079bc4d7b6e70f51f60e70a79c-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-143\" (UID: \"1f6ac2079bc4d7b6e70f51f60e70a79c\") " 
pod="kube-system/kube-scheduler-ip-172-31-20-143" Mar 3 12:47:06.124443 kubelet[3341]: I0303 12:47:06.124258 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd8679201c4fcedf0ec8d21af9992e8d-ca-certs\") pod \"kube-apiserver-ip-172-31-20-143\" (UID: \"dd8679201c4fcedf0ec8d21af9992e8d\") " pod="kube-system/kube-apiserver-ip-172-31-20-143" Mar 3 12:47:06.124443 kubelet[3341]: I0303 12:47:06.124308 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd8679201c4fcedf0ec8d21af9992e8d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-143\" (UID: \"dd8679201c4fcedf0ec8d21af9992e8d\") " pod="kube-system/kube-apiserver-ip-172-31-20-143" Mar 3 12:47:06.124443 kubelet[3341]: I0303 12:47:06.124348 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49df402d850e6ca3583e80ac98d669a3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-143\" (UID: \"49df402d850e6ca3583e80ac98d669a3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:47:06.124443 kubelet[3341]: I0303 12:47:06.124384 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49df402d850e6ca3583e80ac98d669a3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-143\" (UID: \"49df402d850e6ca3583e80ac98d669a3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:47:06.124443 kubelet[3341]: I0303 12:47:06.124422 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd8679201c4fcedf0ec8d21af9992e8d-k8s-certs\") pod 
\"kube-apiserver-ip-172-31-20-143\" (UID: \"dd8679201c4fcedf0ec8d21af9992e8d\") " pod="kube-system/kube-apiserver-ip-172-31-20-143" Mar 3 12:47:06.124685 kubelet[3341]: I0303 12:47:06.124459 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49df402d850e6ca3583e80ac98d669a3-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-143\" (UID: \"49df402d850e6ca3583e80ac98d669a3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:47:06.124685 kubelet[3341]: I0303 12:47:06.124493 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49df402d850e6ca3583e80ac98d669a3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-143\" (UID: \"49df402d850e6ca3583e80ac98d669a3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-143" Mar 3 12:47:06.405292 sudo[3355]: pam_unix(sudo:session): session closed for user root Mar 3 12:47:06.572202 kubelet[3341]: I0303 12:47:06.572138 3341 apiserver.go:52] "Watching apiserver" Mar 3 12:47:06.615659 kubelet[3341]: I0303 12:47:06.615600 3341 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 12:47:06.867109 kubelet[3341]: I0303 12:47:06.865671 3341 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-143" Mar 3 12:47:06.893719 kubelet[3341]: E0303 12:47:06.893485 3341 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-143\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-143" Mar 3 12:47:07.021173 kubelet[3341]: I0303 12:47:07.020984 3341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-143" podStartSLOduration=1.020962566 podStartE2EDuration="1.020962566s" podCreationTimestamp="2026-03-03 12:47:06 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:06.981215914 +0000 UTC m=+1.574258985" watchObservedRunningTime="2026-03-03 12:47:07.020962566 +0000 UTC m=+1.614005613" Mar 3 12:47:07.072731 kubelet[3341]: I0303 12:47:07.072642 3341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-143" podStartSLOduration=1.07262073 podStartE2EDuration="1.07262073s" podCreationTimestamp="2026-03-03 12:47:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:07.022516914 +0000 UTC m=+1.615559985" watchObservedRunningTime="2026-03-03 12:47:07.07262073 +0000 UTC m=+1.665663765" Mar 3 12:47:07.122385 kubelet[3341]: I0303 12:47:07.121821 3341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-143" podStartSLOduration=5.121799515 podStartE2EDuration="5.121799515s" podCreationTimestamp="2026-03-03 12:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:07.074006598 +0000 UTC m=+1.667049669" watchObservedRunningTime="2026-03-03 12:47:07.121799515 +0000 UTC m=+1.714842574" Mar 3 12:47:09.764486 sudo[2364]: pam_unix(sudo:session): session closed for user root Mar 3 12:47:09.843187 sshd[2363]: Connection closed by 20.161.92.111 port 33928 Mar 3 12:47:09.844906 sshd-session[2360]: pam_unix(sshd:session): session closed for user core Mar 3 12:47:09.856722 systemd-logind[1982]: Session 7 logged out. Waiting for processes to exit. Mar 3 12:47:09.859442 systemd[1]: sshd@6-172.31.20.143:22-20.161.92.111:33928.service: Deactivated successfully. Mar 3 12:47:09.868403 systemd[1]: session-7.scope: Deactivated successfully. 
Mar 3 12:47:09.871270 systemd[1]: session-7.scope: Consumed 10.110s CPU time, 264.2M memory peak. Mar 3 12:47:09.880051 systemd-logind[1982]: Removed session 7. Mar 3 12:47:09.973026 update_engine[1984]: I20260303 12:47:09.972929 1984 update_attempter.cc:509] Updating boot flags... Mar 3 12:47:10.051451 kubelet[3341]: I0303 12:47:10.050782 3341 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 3 12:47:10.053512 kubelet[3341]: I0303 12:47:10.053058 3341 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 3 12:47:10.053578 containerd[2012]: time="2026-03-03T12:47:10.052490181Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 3 12:47:10.840061 systemd[1]: Created slice kubepods-besteffort-pod0d939cfd_765c_4e55_adc9_a8c16498d05d.slice - libcontainer container kubepods-besteffort-pod0d939cfd_765c_4e55_adc9_a8c16498d05d.slice. Mar 3 12:47:10.856533 kubelet[3341]: I0303 12:47:10.856493 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlvh9\" (UniqueName: \"kubernetes.io/projected/0d939cfd-765c-4e55-adc9-a8c16498d05d-kube-api-access-mlvh9\") pod \"kube-proxy-nsm8f\" (UID: \"0d939cfd-765c-4e55-adc9-a8c16498d05d\") " pod="kube-system/kube-proxy-nsm8f" Mar 3 12:47:10.857276 kubelet[3341]: I0303 12:47:10.857149 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0d939cfd-765c-4e55-adc9-a8c16498d05d-kube-proxy\") pod \"kube-proxy-nsm8f\" (UID: \"0d939cfd-765c-4e55-adc9-a8c16498d05d\") " pod="kube-system/kube-proxy-nsm8f" Mar 3 12:47:10.857580 kubelet[3341]: I0303 12:47:10.857459 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0d939cfd-765c-4e55-adc9-a8c16498d05d-xtables-lock\") pod \"kube-proxy-nsm8f\" (UID: \"0d939cfd-765c-4e55-adc9-a8c16498d05d\") " pod="kube-system/kube-proxy-nsm8f" Mar 3 12:47:10.858828 kubelet[3341]: I0303 12:47:10.857705 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d939cfd-765c-4e55-adc9-a8c16498d05d-lib-modules\") pod \"kube-proxy-nsm8f\" (UID: \"0d939cfd-765c-4e55-adc9-a8c16498d05d\") " pod="kube-system/kube-proxy-nsm8f" Mar 3 12:47:10.874681 systemd[1]: Created slice kubepods-burstable-podc125c2c9_e098_4429_b9d7_e102365bf1d2.slice - libcontainer container kubepods-burstable-podc125c2c9_e098_4429_b9d7_e102365bf1d2.slice. Mar 3 12:47:10.959435 kubelet[3341]: I0303 12:47:10.959356 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-cni-path\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.959435 kubelet[3341]: I0303 12:47:10.959423 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c125c2c9-e098-4429-b9d7-e102365bf1d2-clustermesh-secrets\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.959647 kubelet[3341]: I0303 12:47:10.959467 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-etc-cni-netd\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.959647 kubelet[3341]: I0303 12:47:10.959508 3341 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-xtables-lock\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.959647 kubelet[3341]: I0303 12:47:10.959540 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c125c2c9-e098-4429-b9d7-e102365bf1d2-hubble-tls\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.959647 kubelet[3341]: I0303 12:47:10.959574 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5ks2\" (UniqueName: \"kubernetes.io/projected/c125c2c9-e098-4429-b9d7-e102365bf1d2-kube-api-access-b5ks2\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.959913 kubelet[3341]: I0303 12:47:10.959700 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-cilium-run\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.959913 kubelet[3341]: I0303 12:47:10.959736 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-bpf-maps\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.961904 kubelet[3341]: I0303 12:47:10.960465 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-cilium-cgroup\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.961904 kubelet[3341]: I0303 12:47:10.960576 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-lib-modules\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.961904 kubelet[3341]: I0303 12:47:10.960618 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c125c2c9-e098-4429-b9d7-e102365bf1d2-cilium-config-path\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.962079 kubelet[3341]: I0303 12:47:10.961907 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-host-proc-sys-net\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.962079 kubelet[3341]: I0303 12:47:10.961950 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-host-proc-sys-kernel\") pod \"cilium-vl447\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:10.962079 kubelet[3341]: I0303 12:47:10.962031 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-hostproc\") pod \"cilium-vl447\" (UID: 
\"c125c2c9-e098-4429-b9d7-e102365bf1d2\") " pod="kube-system/cilium-vl447" Mar 3 12:47:11.163757 containerd[2012]: time="2026-03-03T12:47:11.161439743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nsm8f,Uid:0d939cfd-765c-4e55-adc9-a8c16498d05d,Namespace:kube-system,Attempt:0,}" Mar 3 12:47:11.194340 containerd[2012]: time="2026-03-03T12:47:11.194207063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vl447,Uid:c125c2c9-e098-4429-b9d7-e102365bf1d2,Namespace:kube-system,Attempt:0,}" Mar 3 12:47:11.236103 containerd[2012]: time="2026-03-03T12:47:11.235961471Z" level=info msg="connecting to shim 58548b40e59c827488672bbb7450e86031301173f059f73a85c5de396873a21c" address="unix:///run/containerd/s/1a3010d04f68dc344e12500333ca8b3fc677516ce7ea573e3946faffbcf81490" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:47:11.256892 systemd[1]: Created slice kubepods-besteffort-podfd260d52_0488_4d9c_9d44_4e03cec39bba.slice - libcontainer container kubepods-besteffort-podfd260d52_0488_4d9c_9d44_4e03cec39bba.slice. 
Mar 3 12:47:11.265402 kubelet[3341]: I0303 12:47:11.265192 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd260d52-0488-4d9c-9d44-4e03cec39bba-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-jj5lw\" (UID: \"fd260d52-0488-4d9c-9d44-4e03cec39bba\") " pod="kube-system/cilium-operator-6f9c7c5859-jj5lw" Mar 3 12:47:11.265402 kubelet[3341]: I0303 12:47:11.265296 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46jl4\" (UniqueName: \"kubernetes.io/projected/fd260d52-0488-4d9c-9d44-4e03cec39bba-kube-api-access-46jl4\") pod \"cilium-operator-6f9c7c5859-jj5lw\" (UID: \"fd260d52-0488-4d9c-9d44-4e03cec39bba\") " pod="kube-system/cilium-operator-6f9c7c5859-jj5lw" Mar 3 12:47:11.279922 containerd[2012]: time="2026-03-03T12:47:11.279862823Z" level=info msg="connecting to shim 95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07" address="unix:///run/containerd/s/b49fc75a4ae6fc469f97c0dd2f30d05785a36f3e50d83b808b845699ddb036d8" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:47:11.311018 systemd[1]: Started cri-containerd-58548b40e59c827488672bbb7450e86031301173f059f73a85c5de396873a21c.scope - libcontainer container 58548b40e59c827488672bbb7450e86031301173f059f73a85c5de396873a21c. Mar 3 12:47:11.345106 systemd[1]: Started cri-containerd-95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07.scope - libcontainer container 95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07. 
Mar 3 12:47:11.422955 containerd[2012]: time="2026-03-03T12:47:11.422734656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nsm8f,Uid:0d939cfd-765c-4e55-adc9-a8c16498d05d,Namespace:kube-system,Attempt:0,} returns sandbox id \"58548b40e59c827488672bbb7450e86031301173f059f73a85c5de396873a21c\"" Mar 3 12:47:11.436120 containerd[2012]: time="2026-03-03T12:47:11.435900540Z" level=info msg="CreateContainer within sandbox \"58548b40e59c827488672bbb7450e86031301173f059f73a85c5de396873a21c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 3 12:47:11.446429 containerd[2012]: time="2026-03-03T12:47:11.446290680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vl447,Uid:c125c2c9-e098-4429-b9d7-e102365bf1d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\"" Mar 3 12:47:11.451309 containerd[2012]: time="2026-03-03T12:47:11.451217712Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 3 12:47:11.463206 containerd[2012]: time="2026-03-03T12:47:11.463117788Z" level=info msg="Container ba3b3b275b9df95ce23ddc22953c268978f1254efcc4c02bee044304b44e2c02: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:11.482618 containerd[2012]: time="2026-03-03T12:47:11.482540172Z" level=info msg="CreateContainer within sandbox \"58548b40e59c827488672bbb7450e86031301173f059f73a85c5de396873a21c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ba3b3b275b9df95ce23ddc22953c268978f1254efcc4c02bee044304b44e2c02\"" Mar 3 12:47:11.484099 containerd[2012]: time="2026-03-03T12:47:11.484024572Z" level=info msg="StartContainer for \"ba3b3b275b9df95ce23ddc22953c268978f1254efcc4c02bee044304b44e2c02\"" Mar 3 12:47:11.487845 containerd[2012]: time="2026-03-03T12:47:11.487711932Z" level=info msg="connecting to shim 
ba3b3b275b9df95ce23ddc22953c268978f1254efcc4c02bee044304b44e2c02" address="unix:///run/containerd/s/1a3010d04f68dc344e12500333ca8b3fc677516ce7ea573e3946faffbcf81490" protocol=ttrpc version=3 Mar 3 12:47:11.521102 systemd[1]: Started cri-containerd-ba3b3b275b9df95ce23ddc22953c268978f1254efcc4c02bee044304b44e2c02.scope - libcontainer container ba3b3b275b9df95ce23ddc22953c268978f1254efcc4c02bee044304b44e2c02. Mar 3 12:47:11.578703 containerd[2012]: time="2026-03-03T12:47:11.578524825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-jj5lw,Uid:fd260d52-0488-4d9c-9d44-4e03cec39bba,Namespace:kube-system,Attempt:0,}" Mar 3 12:47:11.629814 containerd[2012]: time="2026-03-03T12:47:11.629707465Z" level=info msg="connecting to shim 0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30" address="unix:///run/containerd/s/ddf50091418d4946dfdd324350b90f0a26336d9e10653191be12e34df7e97671" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:47:11.660451 containerd[2012]: time="2026-03-03T12:47:11.660376081Z" level=info msg="StartContainer for \"ba3b3b275b9df95ce23ddc22953c268978f1254efcc4c02bee044304b44e2c02\" returns successfully" Mar 3 12:47:11.701121 systemd[1]: Started cri-containerd-0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30.scope - libcontainer container 0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30. 
Mar 3 12:47:11.817130 containerd[2012]: time="2026-03-03T12:47:11.817050554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-jj5lw,Uid:fd260d52-0488-4d9c-9d44-4e03cec39bba,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\"" Mar 3 12:47:11.985303 kubelet[3341]: I0303 12:47:11.985062 3341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nsm8f" podStartSLOduration=1.984993663 podStartE2EDuration="1.984993663s" podCreationTimestamp="2026-03-03 12:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:11.923396738 +0000 UTC m=+6.516439785" watchObservedRunningTime="2026-03-03 12:47:11.984993663 +0000 UTC m=+6.578036734" Mar 3 12:47:18.119829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2529197854.mount: Deactivated successfully. 
Mar 3 12:47:20.912675 containerd[2012]: time="2026-03-03T12:47:20.912583187Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:20.917299 containerd[2012]: time="2026-03-03T12:47:20.916858667Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 3 12:47:20.917974 containerd[2012]: time="2026-03-03T12:47:20.917915987Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:20.923738 containerd[2012]: time="2026-03-03T12:47:20.923660111Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.472140431s" Mar 3 12:47:20.923738 containerd[2012]: time="2026-03-03T12:47:20.923730191Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 3 12:47:20.928669 containerd[2012]: time="2026-03-03T12:47:20.928582631Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 3 12:47:20.934828 containerd[2012]: time="2026-03-03T12:47:20.934450919Z" level=info msg="CreateContainer within sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 3 12:47:20.956800 containerd[2012]: time="2026-03-03T12:47:20.953721323Z" level=info msg="Container c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:20.976391 containerd[2012]: time="2026-03-03T12:47:20.976326107Z" level=info msg="CreateContainer within sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\"" Mar 3 12:47:20.977955 containerd[2012]: time="2026-03-03T12:47:20.977837123Z" level=info msg="StartContainer for \"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\"" Mar 3 12:47:20.981009 containerd[2012]: time="2026-03-03T12:47:20.980948675Z" level=info msg="connecting to shim c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6" address="unix:///run/containerd/s/b49fc75a4ae6fc469f97c0dd2f30d05785a36f3e50d83b808b845699ddb036d8" protocol=ttrpc version=3 Mar 3 12:47:21.020085 systemd[1]: Started cri-containerd-c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6.scope - libcontainer container c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6. Mar 3 12:47:21.099518 containerd[2012]: time="2026-03-03T12:47:21.099462392Z" level=info msg="StartContainer for \"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\" returns successfully" Mar 3 12:47:21.126359 systemd[1]: cri-containerd-c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6.scope: Deactivated successfully. 
Mar 3 12:47:21.135484 containerd[2012]: time="2026-03-03T12:47:21.135411224Z" level=info msg="received container exit event container_id:\"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\" id:\"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\" pid:3948 exited_at:{seconds:1772542041 nanos:133542560}" Mar 3 12:47:21.194359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6-rootfs.mount: Deactivated successfully. Mar 3 12:47:21.962477 containerd[2012]: time="2026-03-03T12:47:21.962368392Z" level=info msg="CreateContainer within sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 3 12:47:22.005066 containerd[2012]: time="2026-03-03T12:47:22.004905417Z" level=info msg="Container 58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:22.013319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410755667.mount: Deactivated successfully. 
Mar 3 12:47:22.034139 containerd[2012]: time="2026-03-03T12:47:22.033949509Z" level=info msg="CreateContainer within sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\"" Mar 3 12:47:22.038421 containerd[2012]: time="2026-03-03T12:47:22.038078817Z" level=info msg="StartContainer for \"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\"" Mar 3 12:47:22.041182 containerd[2012]: time="2026-03-03T12:47:22.040936581Z" level=info msg="connecting to shim 58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a" address="unix:///run/containerd/s/b49fc75a4ae6fc469f97c0dd2f30d05785a36f3e50d83b808b845699ddb036d8" protocol=ttrpc version=3 Mar 3 12:47:22.081121 systemd[1]: Started cri-containerd-58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a.scope - libcontainer container 58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a. Mar 3 12:47:22.148386 containerd[2012]: time="2026-03-03T12:47:22.148320117Z" level=info msg="StartContainer for \"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\" returns successfully" Mar 3 12:47:22.177029 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 3 12:47:22.177588 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 3 12:47:22.181736 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 3 12:47:22.187338 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 3 12:47:22.194154 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 3 12:47:22.201914 systemd[1]: cri-containerd-58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a.scope: Deactivated successfully. 
Mar 3 12:47:22.206755 containerd[2012]: time="2026-03-03T12:47:22.206314702Z" level=info msg="received container exit event container_id:\"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\" id:\"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\" pid:3993 exited_at:{seconds:1772542042 nanos:204369370}" Mar 3 12:47:22.252339 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 3 12:47:22.970917 containerd[2012]: time="2026-03-03T12:47:22.970857565Z" level=info msg="CreateContainer within sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 3 12:47:23.000993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a-rootfs.mount: Deactivated successfully. Mar 3 12:47:23.015557 containerd[2012]: time="2026-03-03T12:47:23.015485542Z" level=info msg="Container 83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:23.035552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2493735206.mount: Deactivated successfully. 
Mar 3 12:47:23.049584 containerd[2012]: time="2026-03-03T12:47:23.049508650Z" level=info msg="CreateContainer within sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\"" Mar 3 12:47:23.050550 containerd[2012]: time="2026-03-03T12:47:23.050439814Z" level=info msg="StartContainer for \"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\"" Mar 3 12:47:23.054601 containerd[2012]: time="2026-03-03T12:47:23.054296098Z" level=info msg="connecting to shim 83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a" address="unix:///run/containerd/s/b49fc75a4ae6fc469f97c0dd2f30d05785a36f3e50d83b808b845699ddb036d8" protocol=ttrpc version=3 Mar 3 12:47:23.106069 systemd[1]: Started cri-containerd-83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a.scope - libcontainer container 83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a. Mar 3 12:47:23.251225 containerd[2012]: time="2026-03-03T12:47:23.250541027Z" level=info msg="StartContainer for \"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\" returns successfully" Mar 3 12:47:23.251044 systemd[1]: cri-containerd-83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a.scope: Deactivated successfully. Mar 3 12:47:23.264151 containerd[2012]: time="2026-03-03T12:47:23.263836139Z" level=info msg="received container exit event container_id:\"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\" id:\"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\" pid:4050 exited_at:{seconds:1772542043 nanos:263395727}" Mar 3 12:47:23.350806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a-rootfs.mount: Deactivated successfully. 
Mar 3 12:47:23.805845 containerd[2012]: time="2026-03-03T12:47:23.805582465Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:47:23.807844 containerd[2012]: time="2026-03-03T12:47:23.807439585Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 3 12:47:23.810048 containerd[2012]: time="2026-03-03T12:47:23.809992297Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:47:23.812716 containerd[2012]: time="2026-03-03T12:47:23.812654713Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.883993194s"
Mar 3 12:47:23.812885 containerd[2012]: time="2026-03-03T12:47:23.812716153Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 3 12:47:23.822626 containerd[2012]: time="2026-03-03T12:47:23.822549314Z" level=info msg="CreateContainer within sandbox \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 3 12:47:23.840371 containerd[2012]: time="2026-03-03T12:47:23.840297674Z" level=info msg="Container 52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:23.860971 containerd[2012]: time="2026-03-03T12:47:23.860798198Z" level=info msg="CreateContainer within sandbox \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\""
Mar 3 12:47:23.861578 containerd[2012]: time="2026-03-03T12:47:23.861533510Z" level=info msg="StartContainer for \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\""
Mar 3 12:47:23.864553 containerd[2012]: time="2026-03-03T12:47:23.864415238Z" level=info msg="connecting to shim 52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816" address="unix:///run/containerd/s/ddf50091418d4946dfdd324350b90f0a26336d9e10653191be12e34df7e97671" protocol=ttrpc version=3
Mar 3 12:47:23.898058 systemd[1]: Started cri-containerd-52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816.scope - libcontainer container 52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816.
Mar 3 12:47:24.016646 containerd[2012]: time="2026-03-03T12:47:24.016374803Z" level=info msg="CreateContainer within sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 3 12:47:24.037352 containerd[2012]: time="2026-03-03T12:47:24.036736583Z" level=info msg="StartContainer for \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\" returns successfully"
Mar 3 12:47:24.072000 containerd[2012]: time="2026-03-03T12:47:24.070304747Z" level=info msg="Container 7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:24.100177 containerd[2012]: time="2026-03-03T12:47:24.100057655Z" level=info msg="CreateContainer within sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\""
Mar 3 12:47:24.103407 containerd[2012]: time="2026-03-03T12:47:24.102293891Z" level=info msg="StartContainer for \"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\""
Mar 3 12:47:24.107117 containerd[2012]: time="2026-03-03T12:47:24.106952507Z" level=info msg="connecting to shim 7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea" address="unix:///run/containerd/s/b49fc75a4ae6fc469f97c0dd2f30d05785a36f3e50d83b808b845699ddb036d8" protocol=ttrpc version=3
Mar 3 12:47:24.172317 systemd[1]: Started cri-containerd-7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea.scope - libcontainer container 7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea.
Mar 3 12:47:24.251431 systemd[1]: cri-containerd-7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea.scope: Deactivated successfully.
Mar 3 12:47:24.262042 containerd[2012]: time="2026-03-03T12:47:24.261967764Z" level=info msg="received container exit event container_id:\"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\" id:\"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\" pid:4130 exited_at:{seconds:1772542044 nanos:253162560}"
Mar 3 12:47:24.294340 containerd[2012]: time="2026-03-03T12:47:24.294244932Z" level=info msg="StartContainer for \"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\" returns successfully"
Mar 3 12:47:25.001664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea-rootfs.mount: Deactivated successfully.
Mar 3 12:47:25.046883 containerd[2012]: time="2026-03-03T12:47:25.046180272Z" level=info msg="CreateContainer within sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 3 12:47:25.095810 containerd[2012]: time="2026-03-03T12:47:25.091398312Z" level=info msg="Container 8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:25.122568 containerd[2012]: time="2026-03-03T12:47:25.122471520Z" level=info msg="CreateContainer within sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\""
Mar 3 12:47:25.124203 containerd[2012]: time="2026-03-03T12:47:25.124131756Z" level=info msg="StartContainer for \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\""
Mar 3 12:47:25.128791 containerd[2012]: time="2026-03-03T12:47:25.128606424Z" level=info msg="connecting to shim 8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999" address="unix:///run/containerd/s/b49fc75a4ae6fc469f97c0dd2f30d05785a36f3e50d83b808b845699ddb036d8" protocol=ttrpc version=3
Mar 3 12:47:25.209840 systemd[1]: Started cri-containerd-8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999.scope - libcontainer container 8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999.
Mar 3 12:47:25.303911 kubelet[3341]: I0303 12:47:25.303611 3341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-jj5lw" podStartSLOduration=2.309902714 podStartE2EDuration="14.303583381s" podCreationTimestamp="2026-03-03 12:47:11 +0000 UTC" firstStartedPulling="2026-03-03 12:47:11.820736078 +0000 UTC m=+6.413779125" lastFinishedPulling="2026-03-03 12:47:23.814416757 +0000 UTC m=+18.407459792" observedRunningTime="2026-03-03 12:47:25.180982812 +0000 UTC m=+19.774026315" watchObservedRunningTime="2026-03-03 12:47:25.303583381 +0000 UTC m=+19.896626440"
Mar 3 12:47:25.446309 containerd[2012]: time="2026-03-03T12:47:25.446143298Z" level=info msg="StartContainer for \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\" returns successfully"
Mar 3 12:47:25.914807 kubelet[3341]: I0303 12:47:25.913595 3341 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 3 12:47:26.092636 systemd[1]: Created slice kubepods-burstable-podc56b3370_4336_490f_8700_0b817e71b7a4.slice - libcontainer container kubepods-burstable-podc56b3370_4336_490f_8700_0b817e71b7a4.slice.
Mar 3 12:47:26.152844 systemd[1]: Created slice kubepods-burstable-pod691fdfdd_4298_40f1_8930_e6ae4b47c7af.slice - libcontainer container kubepods-burstable-pod691fdfdd_4298_40f1_8930_e6ae4b47c7af.slice.
Mar 3 12:47:26.194386 kubelet[3341]: I0303 12:47:26.193736 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5njc\" (UniqueName: \"kubernetes.io/projected/c56b3370-4336-490f-8700-0b817e71b7a4-kube-api-access-c5njc\") pod \"coredns-66bc5c9577-sk9nf\" (UID: \"c56b3370-4336-490f-8700-0b817e71b7a4\") " pod="kube-system/coredns-66bc5c9577-sk9nf"
Mar 3 12:47:26.194386 kubelet[3341]: I0303 12:47:26.193948 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/691fdfdd-4298-40f1-8930-e6ae4b47c7af-config-volume\") pod \"coredns-66bc5c9577-8zwsw\" (UID: \"691fdfdd-4298-40f1-8930-e6ae4b47c7af\") " pod="kube-system/coredns-66bc5c9577-8zwsw"
Mar 3 12:47:26.194386 kubelet[3341]: I0303 12:47:26.193988 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7bvr\" (UniqueName: \"kubernetes.io/projected/691fdfdd-4298-40f1-8930-e6ae4b47c7af-kube-api-access-x7bvr\") pod \"coredns-66bc5c9577-8zwsw\" (UID: \"691fdfdd-4298-40f1-8930-e6ae4b47c7af\") " pod="kube-system/coredns-66bc5c9577-8zwsw"
Mar 3 12:47:26.194386 kubelet[3341]: I0303 12:47:26.194025 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c56b3370-4336-490f-8700-0b817e71b7a4-config-volume\") pod \"coredns-66bc5c9577-sk9nf\" (UID: \"c56b3370-4336-490f-8700-0b817e71b7a4\") " pod="kube-system/coredns-66bc5c9577-sk9nf"
Mar 3 12:47:26.235830 kubelet[3341]: I0303 12:47:26.234318 3341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vl447" podStartSLOduration=6.757389223 podStartE2EDuration="16.234296186s" podCreationTimestamp="2026-03-03 12:47:10 +0000 UTC" firstStartedPulling="2026-03-03 12:47:11.450366348 +0000 UTC m=+6.043409395" lastFinishedPulling="2026-03-03 12:47:20.927273311 +0000 UTC m=+15.520316358" observedRunningTime="2026-03-03 12:47:26.231370922 +0000 UTC m=+20.824413981" watchObservedRunningTime="2026-03-03 12:47:26.234296186 +0000 UTC m=+20.827339233"
Mar 3 12:47:26.431291 containerd[2012]: time="2026-03-03T12:47:26.431213702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sk9nf,Uid:c56b3370-4336-490f-8700-0b817e71b7a4,Namespace:kube-system,Attempt:0,}"
Mar 3 12:47:26.476942 containerd[2012]: time="2026-03-03T12:47:26.476215299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8zwsw,Uid:691fdfdd-4298-40f1-8930-e6ae4b47c7af,Namespace:kube-system,Attempt:0,}"
Mar 3 12:47:29.337324 (udev-worker)[4264]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:47:29.337741 (udev-worker)[4265]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:47:29.338018 systemd-networkd[1859]: cilium_host: Link UP
Mar 3 12:47:29.338711 systemd-networkd[1859]: cilium_net: Link UP
Mar 3 12:47:29.340414 systemd-networkd[1859]: cilium_net: Gained carrier
Mar 3 12:47:29.342414 systemd-networkd[1859]: cilium_host: Gained carrier
Mar 3 12:47:29.437549 systemd-networkd[1859]: cilium_net: Gained IPv6LL
Mar 3 12:47:29.537987 (udev-worker)[4315]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:47:29.550459 systemd-networkd[1859]: cilium_vxlan: Link UP
Mar 3 12:47:29.550482 systemd-networkd[1859]: cilium_vxlan: Gained carrier
Mar 3 12:47:29.773857 systemd-networkd[1859]: cilium_host: Gained IPv6LL
Mar 3 12:47:30.249843 kernel: NET: Registered PF_ALG protocol family
Mar 3 12:47:30.677084 systemd-networkd[1859]: cilium_vxlan: Gained IPv6LL
Mar 3 12:47:31.863862 systemd-networkd[1859]: lxc_health: Link UP
Mar 3 12:47:31.869662 systemd-networkd[1859]: lxc_health: Gained carrier
Mar 3 12:47:32.579099 systemd-networkd[1859]: lxce8fba124f68c: Link UP
Mar 3 12:47:32.582845 kernel: eth0: renamed from tmpdadcd
Mar 3 12:47:32.585285 systemd-networkd[1859]: lxce8fba124f68c: Gained carrier
Mar 3 12:47:32.636432 kernel: eth0: renamed from tmp86e9c
Mar 3 12:47:32.635312 systemd-networkd[1859]: lxc2353d1ddfeed: Link UP
Mar 3 12:47:32.641370 systemd-networkd[1859]: lxc2353d1ddfeed: Gained carrier
Mar 3 12:47:32.642148 (udev-worker)[4643]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:47:33.044997 systemd-networkd[1859]: lxc_health: Gained IPv6LL
Mar 3 12:47:34.197089 systemd-networkd[1859]: lxce8fba124f68c: Gained IPv6LL
Mar 3 12:47:34.198223 systemd-networkd[1859]: lxc2353d1ddfeed: Gained IPv6LL
Mar 3 12:47:34.444803 kubelet[3341]: I0303 12:47:34.444643 3341 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 3 12:47:36.677634 ntpd[2197]: Listen normally on 6 cilium_host 192.168.0.130:123
Mar 3 12:47:36.677735 ntpd[2197]: Listen normally on 7 cilium_net [fe80::6c1b:3cff:fe34:efab%4]:123
Mar 3 12:47:36.677839 ntpd[2197]: Listen normally on 8 cilium_host [fe80::c8f6:21ff:fe0c:f84e%5]:123
Mar 3 12:47:36.677892 ntpd[2197]: Listen normally on 9 cilium_vxlan [fe80::883a:56ff:fe8e:ee2c%6]:123
Mar 3 12:47:36.677939 ntpd[2197]: Listen normally on 10 lxc_health [fe80::c83f:43ff:fef5:b7aa%8]:123
Mar 3 12:47:36.677985 ntpd[2197]: Listen normally on 11 lxce8fba124f68c [fe80::781d:1cff:feae:c297%10]:123
Mar 3 12:47:36.678030 ntpd[2197]: Listen normally on 12 lxc2353d1ddfeed [fe80::742a:dbff:fe26:b0f8%12]:123
Mar 3 12:47:41.962747 containerd[2012]: time="2026-03-03T12:47:41.962155760Z" level=info msg="connecting to shim dadcd80298b45c8a7f0361dd4c5ae4cfbc1a879db8deeafb36b7ad38a2996a06" address="unix:///run/containerd/s/e2596d3a78b5bb9422a059b39370b49f1ddff5991ce4f277f6517ec6cf0d9a64" namespace=k8s.io protocol=ttrpc version=3
Mar 3 12:47:42.005148 containerd[2012]: time="2026-03-03T12:47:42.005052088Z" level=info msg="connecting to shim 86e9c912fd9fbfc8f9001fbc13bbf135711dab38e190a162e2549a6f9291f862" address="unix:///run/containerd/s/252665c5f282051d6983b67fbbdd108d1b5f2bbe8fef9f07c3f53bbc23dd1d7c" namespace=k8s.io protocol=ttrpc version=3
Mar 3 12:47:42.057101 systemd[1]: Started cri-containerd-dadcd80298b45c8a7f0361dd4c5ae4cfbc1a879db8deeafb36b7ad38a2996a06.scope - libcontainer container dadcd80298b45c8a7f0361dd4c5ae4cfbc1a879db8deeafb36b7ad38a2996a06.
Mar 3 12:47:42.104356 systemd[1]: Started cri-containerd-86e9c912fd9fbfc8f9001fbc13bbf135711dab38e190a162e2549a6f9291f862.scope - libcontainer container 86e9c912fd9fbfc8f9001fbc13bbf135711dab38e190a162e2549a6f9291f862.
Mar 3 12:47:42.218861 containerd[2012]: time="2026-03-03T12:47:42.218657081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8zwsw,Uid:691fdfdd-4298-40f1-8930-e6ae4b47c7af,Namespace:kube-system,Attempt:0,} returns sandbox id \"dadcd80298b45c8a7f0361dd4c5ae4cfbc1a879db8deeafb36b7ad38a2996a06\""
Mar 3 12:47:42.230660 containerd[2012]: time="2026-03-03T12:47:42.230590733Z" level=info msg="CreateContainer within sandbox \"dadcd80298b45c8a7f0361dd4c5ae4cfbc1a879db8deeafb36b7ad38a2996a06\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 3 12:47:42.254112 containerd[2012]: time="2026-03-03T12:47:42.253112597Z" level=info msg="Container ad3382c51c6eb65b2a0ea3ec65d84e41c3b4da526fddd6746b5ac9fcc8b61196: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:42.273024 containerd[2012]: time="2026-03-03T12:47:42.272703653Z" level=info msg="CreateContainer within sandbox \"dadcd80298b45c8a7f0361dd4c5ae4cfbc1a879db8deeafb36b7ad38a2996a06\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad3382c51c6eb65b2a0ea3ec65d84e41c3b4da526fddd6746b5ac9fcc8b61196\""
Mar 3 12:47:42.273967 containerd[2012]: time="2026-03-03T12:47:42.273908429Z" level=info msg="StartContainer for \"ad3382c51c6eb65b2a0ea3ec65d84e41c3b4da526fddd6746b5ac9fcc8b61196\""
Mar 3 12:47:42.278930 containerd[2012]: time="2026-03-03T12:47:42.278875577Z" level=info msg="connecting to shim ad3382c51c6eb65b2a0ea3ec65d84e41c3b4da526fddd6746b5ac9fcc8b61196" address="unix:///run/containerd/s/e2596d3a78b5bb9422a059b39370b49f1ddff5991ce4f277f6517ec6cf0d9a64" protocol=ttrpc version=3
Mar 3 12:47:42.298695 containerd[2012]: time="2026-03-03T12:47:42.298627673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sk9nf,Uid:c56b3370-4336-490f-8700-0b817e71b7a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"86e9c912fd9fbfc8f9001fbc13bbf135711dab38e190a162e2549a6f9291f862\""
Mar 3 12:47:42.312560 containerd[2012]: time="2026-03-03T12:47:42.312307949Z" level=info msg="CreateContainer within sandbox \"86e9c912fd9fbfc8f9001fbc13bbf135711dab38e190a162e2549a6f9291f862\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 3 12:47:42.326092 systemd[1]: Started cri-containerd-ad3382c51c6eb65b2a0ea3ec65d84e41c3b4da526fddd6746b5ac9fcc8b61196.scope - libcontainer container ad3382c51c6eb65b2a0ea3ec65d84e41c3b4da526fddd6746b5ac9fcc8b61196.
Mar 3 12:47:42.341566 containerd[2012]: time="2026-03-03T12:47:42.340859802Z" level=info msg="Container acde63edddf0bca05bcfbe2a74598811ab5a7932000843c436c2dce6d458d4b8: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:42.355977 containerd[2012]: time="2026-03-03T12:47:42.355900446Z" level=info msg="CreateContainer within sandbox \"86e9c912fd9fbfc8f9001fbc13bbf135711dab38e190a162e2549a6f9291f862\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"acde63edddf0bca05bcfbe2a74598811ab5a7932000843c436c2dce6d458d4b8\""
Mar 3 12:47:42.358204 containerd[2012]: time="2026-03-03T12:47:42.358138722Z" level=info msg="StartContainer for \"acde63edddf0bca05bcfbe2a74598811ab5a7932000843c436c2dce6d458d4b8\""
Mar 3 12:47:42.360727 containerd[2012]: time="2026-03-03T12:47:42.360648918Z" level=info msg="connecting to shim acde63edddf0bca05bcfbe2a74598811ab5a7932000843c436c2dce6d458d4b8" address="unix:///run/containerd/s/252665c5f282051d6983b67fbbdd108d1b5f2bbe8fef9f07c3f53bbc23dd1d7c" protocol=ttrpc version=3
Mar 3 12:47:42.410101 systemd[1]: Started cri-containerd-acde63edddf0bca05bcfbe2a74598811ab5a7932000843c436c2dce6d458d4b8.scope - libcontainer container acde63edddf0bca05bcfbe2a74598811ab5a7932000843c436c2dce6d458d4b8.
Mar 3 12:47:42.437990 containerd[2012]: time="2026-03-03T12:47:42.437892450Z" level=info msg="StartContainer for \"ad3382c51c6eb65b2a0ea3ec65d84e41c3b4da526fddd6746b5ac9fcc8b61196\" returns successfully"
Mar 3 12:47:42.503137 containerd[2012]: time="2026-03-03T12:47:42.502996902Z" level=info msg="StartContainer for \"acde63edddf0bca05bcfbe2a74598811ab5a7932000843c436c2dce6d458d4b8\" returns successfully"
Mar 3 12:47:43.144326 kubelet[3341]: I0303 12:47:43.144154 3341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sk9nf" podStartSLOduration=32.144014862 podStartE2EDuration="32.144014862s" podCreationTimestamp="2026-03-03 12:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:43.140640785 +0000 UTC m=+37.733684060" watchObservedRunningTime="2026-03-03 12:47:43.144014862 +0000 UTC m=+37.737057969"
Mar 3 12:47:50.449200 systemd[1]: Started sshd@7-172.31.20.143:22-20.161.92.111:49588.service - OpenSSH per-connection server daemon (20.161.92.111:49588).
Mar 3 12:47:50.911885 sshd[4846]: Accepted publickey for core from 20.161.92.111 port 49588 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:47:50.914540 sshd-session[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:47:50.928902 systemd-logind[1982]: New session 8 of user core.
Mar 3 12:47:50.937043 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 3 12:47:51.301520 sshd[4849]: Connection closed by 20.161.92.111 port 49588
Mar 3 12:47:51.303051 sshd-session[4846]: pam_unix(sshd:session): session closed for user core
Mar 3 12:47:51.311512 systemd[1]: sshd@7-172.31.20.143:22-20.161.92.111:49588.service: Deactivated successfully.
Mar 3 12:47:51.316225 systemd[1]: session-8.scope: Deactivated successfully.
Mar 3 12:47:51.318303 systemd-logind[1982]: Session 8 logged out. Waiting for processes to exit.
Mar 3 12:47:51.321892 systemd-logind[1982]: Removed session 8.
Mar 3 12:47:56.394217 systemd[1]: Started sshd@8-172.31.20.143:22-20.161.92.111:49598.service - OpenSSH per-connection server daemon (20.161.92.111:49598).
Mar 3 12:47:56.855678 sshd[4861]: Accepted publickey for core from 20.161.92.111 port 49598 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:47:56.858209 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:47:56.866871 systemd-logind[1982]: New session 9 of user core.
Mar 3 12:47:56.873048 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 3 12:47:57.215799 sshd[4866]: Connection closed by 20.161.92.111 port 49598
Mar 3 12:47:57.214639 sshd-session[4861]: pam_unix(sshd:session): session closed for user core
Mar 3 12:47:57.221621 systemd-logind[1982]: Session 9 logged out. Waiting for processes to exit.
Mar 3 12:47:57.222104 systemd[1]: sshd@8-172.31.20.143:22-20.161.92.111:49598.service: Deactivated successfully.
Mar 3 12:47:57.229194 systemd[1]: session-9.scope: Deactivated successfully.
Mar 3 12:47:57.235702 systemd-logind[1982]: Removed session 9.
Mar 3 12:48:02.321068 systemd[1]: Started sshd@9-172.31.20.143:22-20.161.92.111:33084.service - OpenSSH per-connection server daemon (20.161.92.111:33084).
Mar 3 12:48:02.816890 sshd[4879]: Accepted publickey for core from 20.161.92.111 port 33084 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:02.819135 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:02.828873 systemd-logind[1982]: New session 10 of user core.
Mar 3 12:48:02.835075 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 3 12:48:03.202554 sshd[4882]: Connection closed by 20.161.92.111 port 33084
Mar 3 12:48:03.203393 sshd-session[4879]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:03.212057 systemd-logind[1982]: Session 10 logged out. Waiting for processes to exit.
Mar 3 12:48:03.214449 systemd[1]: sshd@9-172.31.20.143:22-20.161.92.111:33084.service: Deactivated successfully.
Mar 3 12:48:03.222717 systemd[1]: session-10.scope: Deactivated successfully.
Mar 3 12:48:03.226180 systemd-logind[1982]: Removed session 10.
Mar 3 12:48:08.294637 systemd[1]: Started sshd@10-172.31.20.143:22-20.161.92.111:33100.service - OpenSSH per-connection server daemon (20.161.92.111:33100).
Mar 3 12:48:08.757880 sshd[4897]: Accepted publickey for core from 20.161.92.111 port 33100 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:08.760322 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:08.771831 systemd-logind[1982]: New session 11 of user core.
Mar 3 12:48:08.777075 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 3 12:48:09.125824 sshd[4900]: Connection closed by 20.161.92.111 port 33100
Mar 3 12:48:09.126643 sshd-session[4897]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:09.135508 systemd[1]: sshd@10-172.31.20.143:22-20.161.92.111:33100.service: Deactivated successfully.
Mar 3 12:48:09.137815 systemd-logind[1982]: Session 11 logged out. Waiting for processes to exit.
Mar 3 12:48:09.146579 systemd[1]: session-11.scope: Deactivated successfully.
Mar 3 12:48:09.154781 systemd-logind[1982]: Removed session 11.
Mar 3 12:48:14.218467 systemd[1]: Started sshd@11-172.31.20.143:22-20.161.92.111:56710.service - OpenSSH per-connection server daemon (20.161.92.111:56710).
Mar 3 12:48:14.680825 sshd[4915]: Accepted publickey for core from 20.161.92.111 port 56710 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:14.682745 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:14.691843 systemd-logind[1982]: New session 12 of user core.
Mar 3 12:48:14.700066 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 3 12:48:15.043194 sshd[4918]: Connection closed by 20.161.92.111 port 56710
Mar 3 12:48:15.044340 sshd-session[4915]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:15.051420 systemd-logind[1982]: Session 12 logged out. Waiting for processes to exit.
Mar 3 12:48:15.053108 systemd[1]: sshd@11-172.31.20.143:22-20.161.92.111:56710.service: Deactivated successfully.
Mar 3 12:48:15.058562 systemd[1]: session-12.scope: Deactivated successfully.
Mar 3 12:48:15.063378 systemd-logind[1982]: Removed session 12.
Mar 3 12:48:15.147270 systemd[1]: Started sshd@12-172.31.20.143:22-20.161.92.111:56720.service - OpenSSH per-connection server daemon (20.161.92.111:56720).
Mar 3 12:48:15.604092 sshd[4930]: Accepted publickey for core from 20.161.92.111 port 56720 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:15.605818 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:15.613917 systemd-logind[1982]: New session 13 of user core.
Mar 3 12:48:15.620021 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 3 12:48:16.063072 sshd[4933]: Connection closed by 20.161.92.111 port 56720
Mar 3 12:48:16.063392 sshd-session[4930]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:16.070698 systemd[1]: sshd@12-172.31.20.143:22-20.161.92.111:56720.service: Deactivated successfully.
Mar 3 12:48:16.077561 systemd[1]: session-13.scope: Deactivated successfully.
Mar 3 12:48:16.081570 systemd-logind[1982]: Session 13 logged out. Waiting for processes to exit.
Mar 3 12:48:16.084574 systemd-logind[1982]: Removed session 13.
Mar 3 12:48:16.173431 systemd[1]: Started sshd@13-172.31.20.143:22-20.161.92.111:56722.service - OpenSSH per-connection server daemon (20.161.92.111:56722).
Mar 3 12:48:16.674967 sshd[4943]: Accepted publickey for core from 20.161.92.111 port 56722 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:16.676480 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:16.684469 systemd-logind[1982]: New session 14 of user core.
Mar 3 12:48:16.694045 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 3 12:48:17.071669 sshd[4946]: Connection closed by 20.161.92.111 port 56722
Mar 3 12:48:17.072462 sshd-session[4943]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:17.080082 systemd[1]: sshd@13-172.31.20.143:22-20.161.92.111:56722.service: Deactivated successfully.
Mar 3 12:48:17.084301 systemd[1]: session-14.scope: Deactivated successfully.
Mar 3 12:48:17.086076 systemd-logind[1982]: Session 14 logged out. Waiting for processes to exit.
Mar 3 12:48:17.090084 systemd-logind[1982]: Removed session 14.
Mar 3 12:48:22.159629 systemd[1]: Started sshd@14-172.31.20.143:22-20.161.92.111:52112.service - OpenSSH per-connection server daemon (20.161.92.111:52112).
Mar 3 12:48:22.622655 sshd[4958]: Accepted publickey for core from 20.161.92.111 port 52112 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:22.625030 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:22.633239 systemd-logind[1982]: New session 15 of user core.
Mar 3 12:48:22.641025 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 3 12:48:22.984733 sshd[4961]: Connection closed by 20.161.92.111 port 52112
Mar 3 12:48:22.985596 sshd-session[4958]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:22.993866 systemd[1]: sshd@14-172.31.20.143:22-20.161.92.111:52112.service: Deactivated successfully.
Mar 3 12:48:22.998455 systemd[1]: session-15.scope: Deactivated successfully.
Mar 3 12:48:23.000718 systemd-logind[1982]: Session 15 logged out. Waiting for processes to exit.
Mar 3 12:48:23.004542 systemd-logind[1982]: Removed session 15.
Mar 3 12:48:28.081175 systemd[1]: Started sshd@15-172.31.20.143:22-20.161.92.111:52122.service - OpenSSH per-connection server daemon (20.161.92.111:52122).
Mar 3 12:48:28.537993 sshd[4973]: Accepted publickey for core from 20.161.92.111 port 52122 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:28.540733 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:28.548401 systemd-logind[1982]: New session 16 of user core.
Mar 3 12:48:28.557052 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 3 12:48:28.899199 sshd[4976]: Connection closed by 20.161.92.111 port 52122
Mar 3 12:48:28.900154 sshd-session[4973]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:28.912134 systemd[1]: sshd@15-172.31.20.143:22-20.161.92.111:52122.service: Deactivated successfully.
Mar 3 12:48:28.920874 systemd[1]: session-16.scope: Deactivated successfully.
Mar 3 12:48:28.925491 systemd-logind[1982]: Session 16 logged out. Waiting for processes to exit.
Mar 3 12:48:28.930154 systemd-logind[1982]: Removed session 16.
Mar 3 12:48:33.993001 systemd[1]: Started sshd@16-172.31.20.143:22-20.161.92.111:35570.service - OpenSSH per-connection server daemon (20.161.92.111:35570).
Mar 3 12:48:34.451854 sshd[4991]: Accepted publickey for core from 20.161.92.111 port 35570 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:34.454266 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:34.462101 systemd-logind[1982]: New session 17 of user core.
Mar 3 12:48:34.476001 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 3 12:48:34.812731 sshd[4994]: Connection closed by 20.161.92.111 port 35570
Mar 3 12:48:34.813603 sshd-session[4991]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:34.821748 systemd[1]: sshd@16-172.31.20.143:22-20.161.92.111:35570.service: Deactivated successfully.
Mar 3 12:48:34.829015 systemd[1]: session-17.scope: Deactivated successfully.
Mar 3 12:48:34.832280 systemd-logind[1982]: Session 17 logged out. Waiting for processes to exit.
Mar 3 12:48:34.835607 systemd-logind[1982]: Removed session 17.
Mar 3 12:48:34.917016 systemd[1]: Started sshd@17-172.31.20.143:22-20.161.92.111:35578.service - OpenSSH per-connection server daemon (20.161.92.111:35578).
Mar 3 12:48:35.379012 sshd[5006]: Accepted publickey for core from 20.161.92.111 port 35578 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:35.381547 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:35.389311 systemd-logind[1982]: New session 18 of user core.
Mar 3 12:48:35.400082 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 3 12:48:35.814000 sshd[5009]: Connection closed by 20.161.92.111 port 35578
Mar 3 12:48:35.816079 sshd-session[5006]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:35.823207 systemd[1]: sshd@17-172.31.20.143:22-20.161.92.111:35578.service: Deactivated successfully.
Mar 3 12:48:35.828616 systemd[1]: session-18.scope: Deactivated successfully.
Mar 3 12:48:35.830901 systemd-logind[1982]: Session 18 logged out. Waiting for processes to exit.
Mar 3 12:48:35.834204 systemd-logind[1982]: Removed session 18.
Mar 3 12:48:35.915413 systemd[1]: Started sshd@18-172.31.20.143:22-20.161.92.111:35590.service - OpenSSH per-connection server daemon (20.161.92.111:35590).
Mar 3 12:48:36.376857 sshd[5019]: Accepted publickey for core from 20.161.92.111 port 35590 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:36.378464 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:36.387908 systemd-logind[1982]: New session 19 of user core.
Mar 3 12:48:36.397051 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 3 12:48:37.585700 sshd[5022]: Connection closed by 20.161.92.111 port 35590
Mar 3 12:48:37.587040 sshd-session[5019]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:37.595996 systemd-logind[1982]: Session 19 logged out. Waiting for processes to exit.
Mar 3 12:48:37.597458 systemd[1]: sshd@18-172.31.20.143:22-20.161.92.111:35590.service: Deactivated successfully.
Mar 3 12:48:37.602508 systemd[1]: session-19.scope: Deactivated successfully.
Mar 3 12:48:37.606725 systemd-logind[1982]: Removed session 19.
Mar 3 12:48:37.691077 systemd[1]: Started sshd@19-172.31.20.143:22-20.161.92.111:35592.service - OpenSSH per-connection server daemon (20.161.92.111:35592).
Mar 3 12:48:38.195976 sshd[5038]: Accepted publickey for core from 20.161.92.111 port 35592 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:38.198367 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:38.206445 systemd-logind[1982]: New session 20 of user core.
Mar 3 12:48:38.217006 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 3 12:48:38.804846 sshd[5041]: Connection closed by 20.161.92.111 port 35592
Mar 3 12:48:38.805608 sshd-session[5038]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:38.812235 systemd[1]: sshd@19-172.31.20.143:22-20.161.92.111:35592.service: Deactivated successfully.
Mar 3 12:48:38.816909 systemd[1]: session-20.scope: Deactivated successfully.
Mar 3 12:48:38.820233 systemd-logind[1982]: Session 20 logged out. Waiting for processes to exit.
Mar 3 12:48:38.824305 systemd-logind[1982]: Removed session 20.
Mar 3 12:48:38.904536 systemd[1]: Started sshd@20-172.31.20.143:22-20.161.92.111:35606.service - OpenSSH per-connection server daemon (20.161.92.111:35606).
Mar 3 12:48:39.399813 sshd[5053]: Accepted publickey for core from 20.161.92.111 port 35606 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:39.401679 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:39.409528 systemd-logind[1982]: New session 21 of user core.
Mar 3 12:48:39.430028 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 3 12:48:39.772123 sshd[5056]: Connection closed by 20.161.92.111 port 35606
Mar 3 12:48:39.773185 sshd-session[5053]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:39.781395 systemd-logind[1982]: Session 21 logged out. Waiting for processes to exit.
Mar 3 12:48:39.782593 systemd[1]: sshd@20-172.31.20.143:22-20.161.92.111:35606.service: Deactivated successfully.
Mar 3 12:48:39.787438 systemd[1]: session-21.scope: Deactivated successfully.
Mar 3 12:48:39.791616 systemd-logind[1982]: Removed session 21.
Mar 3 12:48:44.869494 systemd[1]: Started sshd@21-172.31.20.143:22-20.161.92.111:51608.service - OpenSSH per-connection server daemon (20.161.92.111:51608).
Mar 3 12:48:45.379801 sshd[5073]: Accepted publickey for core from 20.161.92.111 port 51608 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:45.382399 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:45.391374 systemd-logind[1982]: New session 22 of user core.
Mar 3 12:48:45.400051 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 3 12:48:45.753852 sshd[5076]: Connection closed by 20.161.92.111 port 51608
Mar 3 12:48:45.754956 sshd-session[5073]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:45.762316 systemd-logind[1982]: Session 22 logged out. Waiting for processes to exit.
Mar 3 12:48:45.764122 systemd[1]: sshd@21-172.31.20.143:22-20.161.92.111:51608.service: Deactivated successfully.
Mar 3 12:48:45.769256 systemd[1]: session-22.scope: Deactivated successfully.
Mar 3 12:48:45.775382 systemd-logind[1982]: Removed session 22.
Mar 3 12:48:50.853264 systemd[1]: Started sshd@22-172.31.20.143:22-20.161.92.111:37704.service - OpenSSH per-connection server daemon (20.161.92.111:37704).
Mar 3 12:48:51.356798 sshd[5088]: Accepted publickey for core from 20.161.92.111 port 37704 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:51.358552 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:51.368856 systemd-logind[1982]: New session 23 of user core.
Mar 3 12:48:51.375523 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 3 12:48:51.730320 sshd[5091]: Connection closed by 20.161.92.111 port 37704
Mar 3 12:48:51.731372 sshd-session[5088]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:51.738528 systemd-logind[1982]: Session 23 logged out. Waiting for processes to exit.
Mar 3 12:48:51.740425 systemd[1]: sshd@22-172.31.20.143:22-20.161.92.111:37704.service: Deactivated successfully.
Mar 3 12:48:51.746555 systemd[1]: session-23.scope: Deactivated successfully.
Mar 3 12:48:51.750943 systemd-logind[1982]: Removed session 23.
Mar 3 12:48:56.823231 systemd[1]: Started sshd@23-172.31.20.143:22-20.161.92.111:37712.service - OpenSSH per-connection server daemon (20.161.92.111:37712).
Mar 3 12:48:57.288823 sshd[5105]: Accepted publickey for core from 20.161.92.111 port 37712 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:57.290565 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:57.298681 systemd-logind[1982]: New session 24 of user core.
Mar 3 12:48:57.310048 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 3 12:48:57.647735 sshd[5108]: Connection closed by 20.161.92.111 port 37712
Mar 3 12:48:57.648820 sshd-session[5105]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:57.655956 systemd[1]: sshd@23-172.31.20.143:22-20.161.92.111:37712.service: Deactivated successfully.
Mar 3 12:48:57.661858 systemd[1]: session-24.scope: Deactivated successfully.
Mar 3 12:48:57.663890 systemd-logind[1982]: Session 24 logged out. Waiting for processes to exit.
Mar 3 12:48:57.667487 systemd-logind[1982]: Removed session 24.
Mar 3 12:48:57.739565 systemd[1]: Started sshd@24-172.31.20.143:22-20.161.92.111:37720.service - OpenSSH per-connection server daemon (20.161.92.111:37720).
Mar 3 12:48:58.207437 sshd[5120]: Accepted publickey for core from 20.161.92.111 port 37720 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:58.210043 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:58.219900 systemd-logind[1982]: New session 25 of user core.
Mar 3 12:48:58.228070 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 3 12:49:00.915849 kubelet[3341]: I0303 12:49:00.914711 3341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8zwsw" podStartSLOduration=109.91469088 podStartE2EDuration="1m49.91469088s" podCreationTimestamp="2026-03-03 12:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:43.19841613 +0000 UTC m=+37.791459249" watchObservedRunningTime="2026-03-03 12:49:00.91469088 +0000 UTC m=+115.507733927"
Mar 3 12:49:00.971792 containerd[2012]: time="2026-03-03T12:49:00.971297820Z" level=info msg="StopContainer for \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\" with timeout 30 (s)"
Mar 3 12:49:00.973681 containerd[2012]: time="2026-03-03T12:49:00.973571064Z" level=info msg="Stop container \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\" with signal terminated"
Mar 3 12:49:01.007558 containerd[2012]: time="2026-03-03T12:49:01.006984584Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 3 12:49:01.010546 kubelet[3341]: E0303 12:49:01.010025 3341 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 12:49:01.033982 systemd[1]: cri-containerd-52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816.scope: Deactivated successfully.
Mar 3 12:49:01.048172 containerd[2012]: time="2026-03-03T12:49:01.048052028Z" level=info msg="StopContainer for \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\" with timeout 2 (s)"
Mar 3 12:49:01.050083 containerd[2012]: time="2026-03-03T12:49:01.049902524Z" level=info msg="Stop container \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\" with signal terminated"
Mar 3 12:49:01.053626 containerd[2012]: time="2026-03-03T12:49:01.051243620Z" level=info msg="received container exit event container_id:\"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\" id:\"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\" pid:4099 exited_at:{seconds:1772542141 nanos:47426708}"
Mar 3 12:49:01.096319 systemd-networkd[1859]: lxc_health: Link DOWN
Mar 3 12:49:01.096340 systemd-networkd[1859]: lxc_health: Lost carrier
Mar 3 12:49:01.122093 systemd[1]: cri-containerd-8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999.scope: Deactivated successfully.
Mar 3 12:49:01.125687 containerd[2012]: time="2026-03-03T12:49:01.124631001Z" level=info msg="received container exit event container_id:\"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\" id:\"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\" pid:4169 exited_at:{seconds:1772542141 nanos:124313253}"
Mar 3 12:49:01.124971 systemd[1]: cri-containerd-8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999.scope: Consumed 15.962s CPU time, 124M memory peak, 120K read from disk, 12.9M written to disk.
Mar 3 12:49:01.147301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816-rootfs.mount: Deactivated successfully.
Mar 3 12:49:01.180420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999-rootfs.mount: Deactivated successfully.
Mar 3 12:49:01.186009 containerd[2012]: time="2026-03-03T12:49:01.185859741Z" level=info msg="StopContainer for \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\" returns successfully"
Mar 3 12:49:01.188169 containerd[2012]: time="2026-03-03T12:49:01.187745193Z" level=info msg="StopPodSandbox for \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\""
Mar 3 12:49:01.188169 containerd[2012]: time="2026-03-03T12:49:01.187862673Z" level=info msg="Container to stop \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 12:49:01.202342 systemd[1]: cri-containerd-0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30.scope: Deactivated successfully.
Mar 3 12:49:01.205795 containerd[2012]: time="2026-03-03T12:49:01.205648353Z" level=info msg="StopContainer for \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\" returns successfully"
Mar 3 12:49:01.208675 containerd[2012]: time="2026-03-03T12:49:01.208626777Z" level=info msg="StopPodSandbox for \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\""
Mar 3 12:49:01.209697 containerd[2012]: time="2026-03-03T12:49:01.209609145Z" level=info msg="Container to stop \"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 12:49:01.209697 containerd[2012]: time="2026-03-03T12:49:01.209685141Z" level=info msg="Container to stop \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 12:49:01.209900 containerd[2012]: time="2026-03-03T12:49:01.209711433Z" level=info msg="Container to stop \"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 12:49:01.209900 containerd[2012]: time="2026-03-03T12:49:01.209783253Z" level=info msg="Container to stop \"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 12:49:01.209900 containerd[2012]: time="2026-03-03T12:49:01.209808621Z" level=info msg="Container to stop \"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 12:49:01.212689 containerd[2012]: time="2026-03-03T12:49:01.212626437Z" level=info msg="received sandbox exit event container_id:\"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\" id:\"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\" exit_status:137 exited_at:{seconds:1772542141 nanos:211946301}" monitor_name=podsandbox
Mar 3 12:49:01.230871 systemd[1]: cri-containerd-95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07.scope: Deactivated successfully.
Mar 3 12:49:01.236467 containerd[2012]: time="2026-03-03T12:49:01.236383185Z" level=info msg="received sandbox exit event container_id:\"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" id:\"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" exit_status:137 exited_at:{seconds:1772542141 nanos:235899825}" monitor_name=podsandbox
Mar 3 12:49:01.275740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30-rootfs.mount: Deactivated successfully.
Mar 3 12:49:01.288367 containerd[2012]: time="2026-03-03T12:49:01.288095218Z" level=info msg="shim disconnected" id=0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30 namespace=k8s.io
Mar 3 12:49:01.289299 containerd[2012]: time="2026-03-03T12:49:01.288272530Z" level=warning msg="cleaning up after shim disconnected" id=0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30 namespace=k8s.io
Mar 3 12:49:01.289421 containerd[2012]: time="2026-03-03T12:49:01.289316878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 3 12:49:01.300386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07-rootfs.mount: Deactivated successfully.
Mar 3 12:49:01.306922 containerd[2012]: time="2026-03-03T12:49:01.306849634Z" level=info msg="shim disconnected" id=95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07 namespace=k8s.io
Mar 3 12:49:01.307302 containerd[2012]: time="2026-03-03T12:49:01.307230046Z" level=warning msg="cleaning up after shim disconnected" id=95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07 namespace=k8s.io
Mar 3 12:49:01.307791 containerd[2012]: time="2026-03-03T12:49:01.307727158Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 3 12:49:01.325113 containerd[2012]: time="2026-03-03T12:49:01.325026634Z" level=info msg="TearDown network for sandbox \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\" successfully"
Mar 3 12:49:01.325113 containerd[2012]: time="2026-03-03T12:49:01.325082998Z" level=info msg="StopPodSandbox for \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\" returns successfully"
Mar 3 12:49:01.325545 containerd[2012]: time="2026-03-03T12:49:01.325458058Z" level=info msg="received sandbox container exit event sandbox_id:\"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\" exit_status:137 exited_at:{seconds:1772542141 nanos:211946301}" monitor_name=criService
Mar 3 12:49:01.329196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30-shm.mount: Deactivated successfully.
Mar 3 12:49:01.354563 containerd[2012]: time="2026-03-03T12:49:01.353659474Z" level=info msg="received sandbox container exit event sandbox_id:\"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" exit_status:137 exited_at:{seconds:1772542141 nanos:235899825}" monitor_name=criService
Mar 3 12:49:01.355551 containerd[2012]: time="2026-03-03T12:49:01.355469290Z" level=info msg="TearDown network for sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" successfully"
Mar 3 12:49:01.355697 containerd[2012]: time="2026-03-03T12:49:01.355649290Z" level=info msg="StopPodSandbox for \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" returns successfully"
Mar 3 12:49:01.378327 kubelet[3341]: I0303 12:49:01.377543 3341 scope.go:117] "RemoveContainer" containerID="52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816"
Mar 3 12:49:01.389915 containerd[2012]: time="2026-03-03T12:49:01.389864614Z" level=info msg="RemoveContainer for \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\""
Mar 3 12:49:01.404888 containerd[2012]: time="2026-03-03T12:49:01.404278666Z" level=info msg="RemoveContainer for \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\" returns successfully"
Mar 3 12:49:01.406484 kubelet[3341]: I0303 12:49:01.406449 3341 scope.go:117] "RemoveContainer" containerID="52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816"
Mar 3 12:49:01.413937 containerd[2012]: time="2026-03-03T12:49:01.413737618Z" level=error msg="ContainerStatus for \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\": not found"
Mar 3 12:49:01.414544 kubelet[3341]: E0303 12:49:01.414388 3341 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\": not found" containerID="52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816"
Mar 3 12:49:01.414544 kubelet[3341]: I0303 12:49:01.414445 3341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816"} err="failed to get container status \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\": rpc error: code = NotFound desc = an error occurred when try to find container \"52b8ded1009d29bc8d89517bc05131a71a254ecfa245af2326430c249d362816\": not found"
Mar 3 12:49:01.417385 kubelet[3341]: I0303 12:49:01.417167 3341 scope.go:117] "RemoveContainer" containerID="8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999"
Mar 3 12:49:01.421488 containerd[2012]: time="2026-03-03T12:49:01.421404082Z" level=info msg="RemoveContainer for \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\""
Mar 3 12:49:01.432374 containerd[2012]: time="2026-03-03T12:49:01.430991098Z" level=info msg="RemoveContainer for \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\" returns successfully"
Mar 3 12:49:01.432820 kubelet[3341]: I0303 12:49:01.432696 3341 scope.go:117] "RemoveContainer" containerID="7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea"
Mar 3 12:49:01.440465 containerd[2012]: time="2026-03-03T12:49:01.440387986Z" level=info msg="RemoveContainer for \"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\""
Mar 3 12:49:01.454057 containerd[2012]: time="2026-03-03T12:49:01.453988942Z" level=info msg="RemoveContainer for \"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\" returns successfully"
Mar 3 12:49:01.455313 kubelet[3341]: I0303 12:49:01.455267 3341 scope.go:117] "RemoveContainer" containerID="83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a"
Mar 3 12:49:01.462374 containerd[2012]: time="2026-03-03T12:49:01.462327419Z" level=info msg="RemoveContainer for \"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\""
Mar 3 12:49:01.471432 containerd[2012]: time="2026-03-03T12:49:01.471377303Z" level=info msg="RemoveContainer for \"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\" returns successfully"
Mar 3 12:49:01.472005 kubelet[3341]: I0303 12:49:01.471944 3341 scope.go:117] "RemoveContainer" containerID="58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a"
Mar 3 12:49:01.476483 containerd[2012]: time="2026-03-03T12:49:01.475747115Z" level=info msg="RemoveContainer for \"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\""
Mar 3 12:49:01.484497 containerd[2012]: time="2026-03-03T12:49:01.484415783Z" level=info msg="RemoveContainer for \"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\" returns successfully"
Mar 3 12:49:01.485040 kubelet[3341]: I0303 12:49:01.484983 3341 scope.go:117] "RemoveContainer" containerID="c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6"
Mar 3 12:49:01.486119 kubelet[3341]: I0303 12:49:01.486083 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46jl4\" (UniqueName: \"kubernetes.io/projected/fd260d52-0488-4d9c-9d44-4e03cec39bba-kube-api-access-46jl4\") pod \"fd260d52-0488-4d9c-9d44-4e03cec39bba\" (UID: \"fd260d52-0488-4d9c-9d44-4e03cec39bba\") "
Mar 3 12:49:01.486440 kubelet[3341]: I0303 12:49:01.486310 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd260d52-0488-4d9c-9d44-4e03cec39bba-cilium-config-path\") pod \"fd260d52-0488-4d9c-9d44-4e03cec39bba\" (UID: \"fd260d52-0488-4d9c-9d44-4e03cec39bba\") "
Mar 3 12:49:01.489691 containerd[2012]: time="2026-03-03T12:49:01.489644483Z" level=info msg="RemoveContainer for \"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\""
Mar 3 12:49:01.494847 kubelet[3341]: I0303 12:49:01.493711 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd260d52-0488-4d9c-9d44-4e03cec39bba-kube-api-access-46jl4" (OuterVolumeSpecName: "kube-api-access-46jl4") pod "fd260d52-0488-4d9c-9d44-4e03cec39bba" (UID: "fd260d52-0488-4d9c-9d44-4e03cec39bba"). InnerVolumeSpecName "kube-api-access-46jl4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 3 12:49:01.496687 kubelet[3341]: I0303 12:49:01.496631 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd260d52-0488-4d9c-9d44-4e03cec39bba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fd260d52-0488-4d9c-9d44-4e03cec39bba" (UID: "fd260d52-0488-4d9c-9d44-4e03cec39bba"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 3 12:49:01.503230 containerd[2012]: time="2026-03-03T12:49:01.503149979Z" level=info msg="RemoveContainer for \"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\" returns successfully"
Mar 3 12:49:01.503725 kubelet[3341]: I0303 12:49:01.503677 3341 scope.go:117] "RemoveContainer" containerID="8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999"
Mar 3 12:49:01.504158 containerd[2012]: time="2026-03-03T12:49:01.504062267Z" level=error msg="ContainerStatus for \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\": not found"
Mar 3 12:49:01.504394 kubelet[3341]: E0303 12:49:01.504332 3341 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\": not found" containerID="8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999"
Mar 3 12:49:01.504710 kubelet[3341]: I0303 12:49:01.504413 3341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999"} err="failed to get container status \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d9f9ca17252071d232bf38fb7ac6b34ff438e0f68e847094cc43db066413999\": not found"
Mar 3 12:49:01.504710 kubelet[3341]: I0303 12:49:01.504445 3341 scope.go:117] "RemoveContainer" containerID="7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea"
Mar 3 12:49:01.504987 containerd[2012]: time="2026-03-03T12:49:01.504791483Z" level=error msg="ContainerStatus for \"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\": not found"
Mar 3 12:49:01.505478 kubelet[3341]: E0303 12:49:01.505244 3341 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\": not found" containerID="7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea"
Mar 3 12:49:01.505478 kubelet[3341]: I0303 12:49:01.505292 3341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea"} err="failed to get container status \"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f2fff4f94bd078b85df29853277e042a8946afafe99e46816d7bdb72654eeea\": not found"
Mar 3 12:49:01.505478 kubelet[3341]: I0303 12:49:01.505329 3341 scope.go:117] "RemoveContainer" containerID="83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a"
Mar 3 12:49:01.505712 containerd[2012]: time="2026-03-03T12:49:01.505642631Z" level=error msg="ContainerStatus for \"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\": not found"
Mar 3 12:49:01.505965 kubelet[3341]: E0303 12:49:01.505915 3341 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\": not found" containerID="83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a"
Mar 3 12:49:01.506064 kubelet[3341]: I0303 12:49:01.505992 3341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a"} err="failed to get container status \"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\": rpc error: code = NotFound desc = an error occurred when try to find container \"83140a6969d3ae5444b3b39dfa2925b42ee2a4953f71787724100f4b60d6c20a\": not found"
Mar 3 12:49:01.506064 kubelet[3341]: I0303 12:49:01.506045 3341 scope.go:117] "RemoveContainer" containerID="58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a"
Mar 3 12:49:01.506503 containerd[2012]: time="2026-03-03T12:49:01.506423591Z" level=error msg="ContainerStatus for \"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\": not found"
Mar 3 12:49:01.506874 kubelet[3341]: E0303 12:49:01.506746 3341 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\": not found" containerID="58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a"
Mar 3 12:49:01.506874 kubelet[3341]: I0303 12:49:01.506831 3341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a"} err="failed to get container status \"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\": rpc error: code = NotFound desc = an error occurred when try to find container \"58d2e28804a6dc2fbd664d61365660d09b7e7e638e9553a0edc01c4660c6f93a\": not found"
Mar 3 12:49:01.507106 kubelet[3341]: I0303 12:49:01.507030 3341 scope.go:117] "RemoveContainer" containerID="c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6"
Mar 3 12:49:01.507674 containerd[2012]: time="2026-03-03T12:49:01.507607007Z" level=error msg="ContainerStatus for \"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\": not found"
Mar 3 12:49:01.507896 kubelet[3341]: E0303 12:49:01.507861 3341 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\": not found" containerID="c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6"
Mar 3 12:49:01.508016 kubelet[3341]: I0303 12:49:01.507904 3341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6"} err="failed to get container status \"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"c63169ff01d2b75efe9abca814e99192c152026e1ee8ab90fc3b225b658a44d6\": not found"
Mar 3 12:49:01.587978 kubelet[3341]: I0303 12:49:01.587747 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-host-proc-sys-net\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.588275 kubelet[3341]: I0303 12:49:01.587864 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 12:49:01.588275 kubelet[3341]: I0303 12:49:01.587948 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c125c2c9-e098-4429-b9d7-e102365bf1d2-clustermesh-secrets\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.589797 kubelet[3341]: I0303 12:49:01.588247 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-host-proc-sys-kernel\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.589797 kubelet[3341]: I0303 12:49:01.588944 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5ks2\" (UniqueName: \"kubernetes.io/projected/c125c2c9-e098-4429-b9d7-e102365bf1d2-kube-api-access-b5ks2\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.589797 kubelet[3341]: I0303 12:49:01.588987 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-bpf-maps\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.589797 kubelet[3341]: I0303 12:49:01.589022 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-etc-cni-netd\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.589797 kubelet[3341]: I0303 12:49:01.589058 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c125c2c9-e098-4429-b9d7-e102365bf1d2-hubble-tls\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.589797 kubelet[3341]: I0303 12:49:01.589093 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-cilium-cgroup\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.590194 kubelet[3341]: I0303 12:49:01.589132 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-hostproc\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.590194 kubelet[3341]: I0303 12:49:01.589166 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-cilium-run\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.590194 kubelet[3341]: I0303 12:49:01.589201 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c125c2c9-e098-4429-b9d7-e102365bf1d2-cilium-config-path\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.590194 kubelet[3341]: I0303 12:49:01.589232 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-lib-modules\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.590194 kubelet[3341]: I0303 12:49:01.589264 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-xtables-lock\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.590194 kubelet[3341]: I0303 12:49:01.589296 3341 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-cni-path\") pod \"c125c2c9-e098-4429-b9d7-e102365bf1d2\" (UID: \"c125c2c9-e098-4429-b9d7-e102365bf1d2\") "
Mar 3 12:49:01.590485 kubelet[3341]: I0303 12:49:01.589372 3341 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-host-proc-sys-net\") on node \"ip-172-31-20-143\" DevicePath \"\""
Mar 3 12:49:01.590485 kubelet[3341]: I0303 12:49:01.589397 3341 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-46jl4\" (UniqueName: \"kubernetes.io/projected/fd260d52-0488-4d9c-9d44-4e03cec39bba-kube-api-access-46jl4\") on node \"ip-172-31-20-143\" DevicePath \"\""
Mar 3 12:49:01.590485 kubelet[3341]: I0303 12:49:01.589419 3341 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd260d52-0488-4d9c-9d44-4e03cec39bba-cilium-config-path\") on node \"ip-172-31-20-143\" DevicePath \"\""
Mar 3 12:49:01.590485 kubelet[3341]: I0303 12:49:01.589463 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-cni-path" (OuterVolumeSpecName: "cni-path") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "cni-path".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 12:49:01.590485 kubelet[3341]: I0303 12:49:01.589507 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 12:49:01.591691 kubelet[3341]: I0303 12:49:01.591628 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-hostproc" (OuterVolumeSpecName: "hostproc") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 12:49:01.591863 kubelet[3341]: I0303 12:49:01.591712 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 12:49:01.591863 kubelet[3341]: I0303 12:49:01.591751 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 12:49:01.593415 kubelet[3341]: I0303 12:49:01.593341 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 12:49:01.593664 kubelet[3341]: I0303 12:49:01.593620 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 12:49:01.593882 kubelet[3341]: I0303 12:49:01.593801 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 12:49:01.593882 kubelet[3341]: I0303 12:49:01.593848 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 12:49:01.596347 kubelet[3341]: I0303 12:49:01.596281 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c125c2c9-e098-4429-b9d7-e102365bf1d2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 3 12:49:01.598899 kubelet[3341]: I0303 12:49:01.598740 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c125c2c9-e098-4429-b9d7-e102365bf1d2-kube-api-access-b5ks2" (OuterVolumeSpecName: "kube-api-access-b5ks2") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "kube-api-access-b5ks2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 3 12:49:01.601670 kubelet[3341]: I0303 12:49:01.601609 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c125c2c9-e098-4429-b9d7-e102365bf1d2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 3 12:49:01.602202 kubelet[3341]: I0303 12:49:01.602152 3341 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c125c2c9-e098-4429-b9d7-e102365bf1d2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c125c2c9-e098-4429-b9d7-e102365bf1d2" (UID: "c125c2c9-e098-4429-b9d7-e102365bf1d2"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 3 12:49:01.687732 systemd[1]: Removed slice kubepods-besteffort-podfd260d52_0488_4d9c_9d44_4e03cec39bba.slice - libcontainer container kubepods-besteffort-podfd260d52_0488_4d9c_9d44_4e03cec39bba.slice. Mar 3 12:49:01.690934 kubelet[3341]: I0303 12:49:01.690148 3341 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-bpf-maps\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.690934 kubelet[3341]: I0303 12:49:01.690188 3341 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-etc-cni-netd\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.690934 kubelet[3341]: I0303 12:49:01.690218 3341 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c125c2c9-e098-4429-b9d7-e102365bf1d2-hubble-tls\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.690934 kubelet[3341]: I0303 12:49:01.690238 3341 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-cilium-cgroup\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.690934 kubelet[3341]: I0303 12:49:01.690257 3341 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-hostproc\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.690934 kubelet[3341]: I0303 12:49:01.690276 3341 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-cilium-run\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.690934 kubelet[3341]: I0303 12:49:01.690295 3341 reconciler_common.go:299] "Volume detached for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c125c2c9-e098-4429-b9d7-e102365bf1d2-cilium-config-path\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.690934 kubelet[3341]: I0303 12:49:01.690315 3341 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-lib-modules\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.693844 kubelet[3341]: I0303 12:49:01.690333 3341 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-xtables-lock\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.693844 kubelet[3341]: I0303 12:49:01.690354 3341 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-cni-path\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.693844 kubelet[3341]: I0303 12:49:01.690373 3341 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c125c2c9-e098-4429-b9d7-e102365bf1d2-clustermesh-secrets\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.693844 kubelet[3341]: I0303 12:49:01.690393 3341 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c125c2c9-e098-4429-b9d7-e102365bf1d2-host-proc-sys-kernel\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.693844 kubelet[3341]: I0303 12:49:01.690415 3341 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b5ks2\" (UniqueName: \"kubernetes.io/projected/c125c2c9-e098-4429-b9d7-e102365bf1d2-kube-api-access-b5ks2\") on node \"ip-172-31-20-143\" DevicePath \"\"" Mar 3 12:49:01.774602 kubelet[3341]: I0303 12:49:01.774544 3341 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fd260d52-0488-4d9c-9d44-4e03cec39bba" path="/var/lib/kubelet/pods/fd260d52-0488-4d9c-9d44-4e03cec39bba/volumes" Mar 3 12:49:01.785616 systemd[1]: Removed slice kubepods-burstable-podc125c2c9_e098_4429_b9d7_e102365bf1d2.slice - libcontainer container kubepods-burstable-podc125c2c9_e098_4429_b9d7_e102365bf1d2.slice. Mar 3 12:49:01.786180 systemd[1]: kubepods-burstable-podc125c2c9_e098_4429_b9d7_e102365bf1d2.slice: Consumed 16.180s CPU time, 124.4M memory peak, 120K read from disk, 12.9M written to disk. Mar 3 12:49:02.144392 systemd[1]: var-lib-kubelet-pods-fd260d52\x2d0488\x2d4d9c\x2d9d44\x2d4e03cec39bba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d46jl4.mount: Deactivated successfully. Mar 3 12:49:02.144566 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07-shm.mount: Deactivated successfully. Mar 3 12:49:02.144700 systemd[1]: var-lib-kubelet-pods-c125c2c9\x2de098\x2d4429\x2db9d7\x2de102365bf1d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db5ks2.mount: Deactivated successfully. Mar 3 12:49:02.144854 systemd[1]: var-lib-kubelet-pods-c125c2c9\x2de098\x2d4429\x2db9d7\x2de102365bf1d2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 3 12:49:02.144988 systemd[1]: var-lib-kubelet-pods-c125c2c9\x2de098\x2d4429\x2db9d7\x2de102365bf1d2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 3 12:49:02.923428 sshd[5123]: Connection closed by 20.161.92.111 port 37720 Mar 3 12:49:02.923298 sshd-session[5120]: pam_unix(sshd:session): session closed for user core Mar 3 12:49:02.930975 systemd[1]: sshd@24-172.31.20.143:22-20.161.92.111:37720.service: Deactivated successfully. Mar 3 12:49:02.936653 systemd[1]: session-25.scope: Deactivated successfully. Mar 3 12:49:02.937482 systemd[1]: session-25.scope: Consumed 1.889s CPU time, 25.6M memory peak. 
Mar 3 12:49:02.940265 systemd-logind[1982]: Session 25 logged out. Waiting for processes to exit. Mar 3 12:49:02.943549 systemd-logind[1982]: Removed session 25. Mar 3 12:49:03.015788 systemd[1]: Started sshd@25-172.31.20.143:22-20.161.92.111:57792.service - OpenSSH per-connection server daemon (20.161.92.111:57792). Mar 3 12:49:03.488805 sshd[5269]: Accepted publickey for core from 20.161.92.111 port 57792 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:49:03.491201 sshd-session[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:49:03.499026 systemd-logind[1982]: New session 26 of user core. Mar 3 12:49:03.508461 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 3 12:49:03.677505 ntpd[2197]: Deleting 10 lxc_health, [fe80::c83f:43ff:fef5:b7aa%8]:123, stats: received=0, sent=0, dropped=0, active_time=87 secs Mar 3 12:49:03.678620 ntpd[2197]: 3 Mar 12:49:03 ntpd[2197]: Deleting 10 lxc_health, [fe80::c83f:43ff:fef5:b7aa%8]:123, stats: received=0, sent=0, dropped=0, active_time=87 secs Mar 3 12:49:03.780959 kubelet[3341]: I0303 12:49:03.778859 3341 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c125c2c9-e098-4429-b9d7-e102365bf1d2" path="/var/lib/kubelet/pods/c125c2c9-e098-4429-b9d7-e102365bf1d2/volumes" Mar 3 12:49:05.356937 systemd[1]: Created slice kubepods-burstable-podf473f239_767d_4ad9_8055_1ef2a482fd14.slice - libcontainer container kubepods-burstable-podf473f239_767d_4ad9_8055_1ef2a482fd14.slice. Mar 3 12:49:05.361282 sshd[5272]: Connection closed by 20.161.92.111 port 57792 Mar 3 12:49:05.361961 sshd-session[5269]: pam_unix(sshd:session): session closed for user core Mar 3 12:49:05.380374 systemd[1]: sshd@25-172.31.20.143:22-20.161.92.111:57792.service: Deactivated successfully. Mar 3 12:49:05.386488 systemd[1]: session-26.scope: Deactivated successfully. Mar 3 12:49:05.387337 systemd[1]: session-26.scope: Consumed 1.518s CPU time, 23.7M memory peak. 
Mar 3 12:49:05.391404 systemd-logind[1982]: Session 26 logged out. Waiting for processes to exit. Mar 3 12:49:05.397140 systemd-logind[1982]: Removed session 26. Mar 3 12:49:05.466020 systemd[1]: Started sshd@26-172.31.20.143:22-20.161.92.111:57796.service - OpenSSH per-connection server daemon (20.161.92.111:57796). Mar 3 12:49:05.513329 kubelet[3341]: I0303 12:49:05.513272 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gft4n\" (UniqueName: \"kubernetes.io/projected/f473f239-767d-4ad9-8055-1ef2a482fd14-kube-api-access-gft4n\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515034 kubelet[3341]: I0303 12:49:05.514126 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f473f239-767d-4ad9-8055-1ef2a482fd14-cni-path\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515034 kubelet[3341]: I0303 12:49:05.514188 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f473f239-767d-4ad9-8055-1ef2a482fd14-xtables-lock\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515034 kubelet[3341]: I0303 12:49:05.514230 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f473f239-767d-4ad9-8055-1ef2a482fd14-host-proc-sys-kernel\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515034 kubelet[3341]: I0303 12:49:05.514289 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f473f239-767d-4ad9-8055-1ef2a482fd14-cilium-ipsec-secrets\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515034 kubelet[3341]: I0303 12:49:05.514326 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f473f239-767d-4ad9-8055-1ef2a482fd14-host-proc-sys-net\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515034 kubelet[3341]: I0303 12:49:05.514367 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f473f239-767d-4ad9-8055-1ef2a482fd14-etc-cni-netd\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515420 kubelet[3341]: I0303 12:49:05.514402 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f473f239-767d-4ad9-8055-1ef2a482fd14-clustermesh-secrets\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515420 kubelet[3341]: I0303 12:49:05.514435 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f473f239-767d-4ad9-8055-1ef2a482fd14-lib-modules\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515420 kubelet[3341]: I0303 12:49:05.514470 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f473f239-767d-4ad9-8055-1ef2a482fd14-cilium-config-path\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515420 kubelet[3341]: I0303 12:49:05.514527 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f473f239-767d-4ad9-8055-1ef2a482fd14-bpf-maps\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515420 kubelet[3341]: I0303 12:49:05.514567 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f473f239-767d-4ad9-8055-1ef2a482fd14-hostproc\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515420 kubelet[3341]: I0303 12:49:05.514605 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f473f239-767d-4ad9-8055-1ef2a482fd14-cilium-cgroup\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515693 kubelet[3341]: I0303 12:49:05.514640 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f473f239-767d-4ad9-8055-1ef2a482fd14-hubble-tls\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " pod="kube-system/cilium-24sqq" Mar 3 12:49:05.515693 kubelet[3341]: I0303 12:49:05.514677 3341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f473f239-767d-4ad9-8055-1ef2a482fd14-cilium-run\") pod \"cilium-24sqq\" (UID: \"f473f239-767d-4ad9-8055-1ef2a482fd14\") " 
pod="kube-system/cilium-24sqq" Mar 3 12:49:05.685980 containerd[2012]: time="2026-03-03T12:49:05.685389615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24sqq,Uid:f473f239-767d-4ad9-8055-1ef2a482fd14,Namespace:kube-system,Attempt:0,}" Mar 3 12:49:05.722674 containerd[2012]: time="2026-03-03T12:49:05.722233492Z" level=info msg="connecting to shim 35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19" address="unix:///run/containerd/s/cd489d9f1a7e149c9ba1efe567c3a561ab4776ca7b66f96176c6c09b5fce7a92" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:49:05.772087 systemd[1]: Started cri-containerd-35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19.scope - libcontainer container 35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19. Mar 3 12:49:05.774992 containerd[2012]: time="2026-03-03T12:49:05.774815728Z" level=info msg="StopPodSandbox for \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\"" Mar 3 12:49:05.776520 containerd[2012]: time="2026-03-03T12:49:05.776256664Z" level=info msg="TearDown network for sandbox \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\" successfully" Mar 3 12:49:05.777447 containerd[2012]: time="2026-03-03T12:49:05.776826256Z" level=info msg="StopPodSandbox for \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\" returns successfully" Mar 3 12:49:05.779019 containerd[2012]: time="2026-03-03T12:49:05.778507048Z" level=info msg="RemovePodSandbox for \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\"" Mar 3 12:49:05.779019 containerd[2012]: time="2026-03-03T12:49:05.778568116Z" level=info msg="Forcibly stopping sandbox \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\"" Mar 3 12:49:05.779019 containerd[2012]: time="2026-03-03T12:49:05.778712488Z" level=info msg="TearDown network for sandbox \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\" successfully" Mar 3 12:49:05.781892 
containerd[2012]: time="2026-03-03T12:49:05.781814812Z" level=info msg="Ensure that sandbox 0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30 in task-service has been cleanup successfully" Mar 3 12:49:05.789382 containerd[2012]: time="2026-03-03T12:49:05.789242680Z" level=info msg="RemovePodSandbox \"0ab44fbf508eba8015835114e3ed73debfcfc4598776d602a2683428aab7cd30\" returns successfully" Mar 3 12:49:05.792254 containerd[2012]: time="2026-03-03T12:49:05.791620672Z" level=info msg="StopPodSandbox for \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\"" Mar 3 12:49:05.792254 containerd[2012]: time="2026-03-03T12:49:05.791841796Z" level=info msg="TearDown network for sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" successfully" Mar 3 12:49:05.792254 containerd[2012]: time="2026-03-03T12:49:05.791869408Z" level=info msg="StopPodSandbox for \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" returns successfully" Mar 3 12:49:05.793126 containerd[2012]: time="2026-03-03T12:49:05.793064296Z" level=info msg="RemovePodSandbox for \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\"" Mar 3 12:49:05.793328 containerd[2012]: time="2026-03-03T12:49:05.793299076Z" level=info msg="Forcibly stopping sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\"" Mar 3 12:49:05.793702 containerd[2012]: time="2026-03-03T12:49:05.793627816Z" level=info msg="TearDown network for sandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" successfully" Mar 3 12:49:05.796716 containerd[2012]: time="2026-03-03T12:49:05.796587976Z" level=info msg="Ensure that sandbox 95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07 in task-service has been cleanup successfully" Mar 3 12:49:05.803819 containerd[2012]: time="2026-03-03T12:49:05.803607664Z" level=info msg="RemovePodSandbox \"95b1dbe54b474024317b5576c448b34a3fbc58c534eb6582c6d2e49591ea6c07\" returns 
successfully" Mar 3 12:49:05.838402 containerd[2012]: time="2026-03-03T12:49:05.838320172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24sqq,Uid:f473f239-767d-4ad9-8055-1ef2a482fd14,Namespace:kube-system,Attempt:0,} returns sandbox id \"35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19\"" Mar 3 12:49:05.850200 containerd[2012]: time="2026-03-03T12:49:05.850143436Z" level=info msg="CreateContainer within sandbox \"35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 3 12:49:05.865417 containerd[2012]: time="2026-03-03T12:49:05.864342364Z" level=info msg="Container b1c6692fc315612bc02816426115088b10b3c8f4f08cde9029c2b67213a7409e: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:49:05.876168 containerd[2012]: time="2026-03-03T12:49:05.876105376Z" level=info msg="CreateContainer within sandbox \"35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b1c6692fc315612bc02816426115088b10b3c8f4f08cde9029c2b67213a7409e\"" Mar 3 12:49:05.877069 containerd[2012]: time="2026-03-03T12:49:05.877001056Z" level=info msg="StartContainer for \"b1c6692fc315612bc02816426115088b10b3c8f4f08cde9029c2b67213a7409e\"" Mar 3 12:49:05.880306 containerd[2012]: time="2026-03-03T12:49:05.880056052Z" level=info msg="connecting to shim b1c6692fc315612bc02816426115088b10b3c8f4f08cde9029c2b67213a7409e" address="unix:///run/containerd/s/cd489d9f1a7e149c9ba1efe567c3a561ab4776ca7b66f96176c6c09b5fce7a92" protocol=ttrpc version=3 Mar 3 12:49:05.916192 systemd[1]: Started cri-containerd-b1c6692fc315612bc02816426115088b10b3c8f4f08cde9029c2b67213a7409e.scope - libcontainer container b1c6692fc315612bc02816426115088b10b3c8f4f08cde9029c2b67213a7409e. 
Mar 3 12:49:05.965968 sshd[5282]: Accepted publickey for core from 20.161.92.111 port 57796 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:49:05.970540 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:49:05.986381 systemd-logind[1982]: New session 27 of user core. Mar 3 12:49:05.992199 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 3 12:49:06.013497 kubelet[3341]: E0303 12:49:06.013439 3341 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 3 12:49:06.031287 containerd[2012]: time="2026-03-03T12:49:06.031237549Z" level=info msg="StartContainer for \"b1c6692fc315612bc02816426115088b10b3c8f4f08cde9029c2b67213a7409e\" returns successfully" Mar 3 12:49:06.048196 systemd[1]: cri-containerd-b1c6692fc315612bc02816426115088b10b3c8f4f08cde9029c2b67213a7409e.scope: Deactivated successfully. Mar 3 12:49:06.054069 containerd[2012]: time="2026-03-03T12:49:06.053994721Z" level=info msg="received container exit event container_id:\"b1c6692fc315612bc02816426115088b10b3c8f4f08cde9029c2b67213a7409e\" id:\"b1c6692fc315612bc02816426115088b10b3c8f4f08cde9029c2b67213a7409e\" pid:5350 exited_at:{seconds:1772542146 nanos:53474257}" Mar 3 12:49:06.222458 sshd[5359]: Connection closed by 20.161.92.111 port 57796 Mar 3 12:49:06.222137 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Mar 3 12:49:06.230070 systemd-logind[1982]: Session 27 logged out. Waiting for processes to exit. Mar 3 12:49:06.231580 systemd[1]: sshd@26-172.31.20.143:22-20.161.92.111:57796.service: Deactivated successfully. Mar 3 12:49:06.237225 systemd[1]: session-27.scope: Deactivated successfully. Mar 3 12:49:06.241459 systemd-logind[1982]: Removed session 27. 
Mar 3 12:49:06.324944 systemd[1]: Started sshd@27-172.31.20.143:22-20.161.92.111:57802.service - OpenSSH per-connection server daemon (20.161.92.111:57802). Mar 3 12:49:06.455228 containerd[2012]: time="2026-03-03T12:49:06.455153691Z" level=info msg="CreateContainer within sandbox \"35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 3 12:49:06.475416 containerd[2012]: time="2026-03-03T12:49:06.474899007Z" level=info msg="Container 9299ae48fdd6be3df61d36ec584eaaff9e34a973cb9bbb4f073225b99362bc4f: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:49:06.491348 containerd[2012]: time="2026-03-03T12:49:06.491212612Z" level=info msg="CreateContainer within sandbox \"35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9299ae48fdd6be3df61d36ec584eaaff9e34a973cb9bbb4f073225b99362bc4f\"" Mar 3 12:49:06.494270 containerd[2012]: time="2026-03-03T12:49:06.493746712Z" level=info msg="StartContainer for \"9299ae48fdd6be3df61d36ec584eaaff9e34a973cb9bbb4f073225b99362bc4f\"" Mar 3 12:49:06.498623 containerd[2012]: time="2026-03-03T12:49:06.498377848Z" level=info msg="connecting to shim 9299ae48fdd6be3df61d36ec584eaaff9e34a973cb9bbb4f073225b99362bc4f" address="unix:///run/containerd/s/cd489d9f1a7e149c9ba1efe567c3a561ab4776ca7b66f96176c6c09b5fce7a92" protocol=ttrpc version=3 Mar 3 12:49:06.529381 systemd[1]: Started cri-containerd-9299ae48fdd6be3df61d36ec584eaaff9e34a973cb9bbb4f073225b99362bc4f.scope - libcontainer container 9299ae48fdd6be3df61d36ec584eaaff9e34a973cb9bbb4f073225b99362bc4f. 
Mar 3 12:49:06.594492 containerd[2012]: time="2026-03-03T12:49:06.594425176Z" level=info msg="StartContainer for \"9299ae48fdd6be3df61d36ec584eaaff9e34a973cb9bbb4f073225b99362bc4f\" returns successfully"
Mar 3 12:49:06.612041 systemd[1]: cri-containerd-9299ae48fdd6be3df61d36ec584eaaff9e34a973cb9bbb4f073225b99362bc4f.scope: Deactivated successfully.
Mar 3 12:49:06.614743 containerd[2012]: time="2026-03-03T12:49:06.614637856Z" level=info msg="received container exit event container_id:\"9299ae48fdd6be3df61d36ec584eaaff9e34a973cb9bbb4f073225b99362bc4f\" id:\"9299ae48fdd6be3df61d36ec584eaaff9e34a973cb9bbb4f073225b99362bc4f\" pid:5406 exited_at:{seconds:1772542146 nanos:613474636}"
Mar 3 12:49:06.683301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9299ae48fdd6be3df61d36ec584eaaff9e34a973cb9bbb4f073225b99362bc4f-rootfs.mount: Deactivated successfully.
Mar 3 12:49:06.769126 kubelet[3341]: E0303 12:49:06.768144 3341 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-8zwsw" podUID="691fdfdd-4298-40f1-8930-e6ae4b47c7af"
Mar 3 12:49:06.822941 sshd[5389]: Accepted publickey for core from 20.161.92.111 port 57802 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:49:06.824100 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:49:06.832470 systemd-logind[1982]: New session 28 of user core.
Mar 3 12:49:06.842007 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 3 12:49:07.473375 containerd[2012]: time="2026-03-03T12:49:07.472091848Z" level=info msg="CreateContainer within sandbox \"35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 3 12:49:07.499095 containerd[2012]: time="2026-03-03T12:49:07.499034693Z" level=info msg="Container dafe509028985f67fd9fbdaadd9ae2ee687b389b83167c1615417ab53e99d3e0: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:49:07.522806 containerd[2012]: time="2026-03-03T12:49:07.522539117Z" level=info msg="CreateContainer within sandbox \"35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dafe509028985f67fd9fbdaadd9ae2ee687b389b83167c1615417ab53e99d3e0\""
Mar 3 12:49:07.524207 containerd[2012]: time="2026-03-03T12:49:07.524129489Z" level=info msg="StartContainer for \"dafe509028985f67fd9fbdaadd9ae2ee687b389b83167c1615417ab53e99d3e0\""
Mar 3 12:49:07.528623 containerd[2012]: time="2026-03-03T12:49:07.528550925Z" level=info msg="connecting to shim dafe509028985f67fd9fbdaadd9ae2ee687b389b83167c1615417ab53e99d3e0" address="unix:///run/containerd/s/cd489d9f1a7e149c9ba1efe567c3a561ab4776ca7b66f96176c6c09b5fce7a92" protocol=ttrpc version=3
Mar 3 12:49:07.571058 systemd[1]: Started cri-containerd-dafe509028985f67fd9fbdaadd9ae2ee687b389b83167c1615417ab53e99d3e0.scope - libcontainer container dafe509028985f67fd9fbdaadd9ae2ee687b389b83167c1615417ab53e99d3e0.
Mar 3 12:49:07.679320 systemd[1]: cri-containerd-dafe509028985f67fd9fbdaadd9ae2ee687b389b83167c1615417ab53e99d3e0.scope: Deactivated successfully.
Mar 3 12:49:07.682731 containerd[2012]: time="2026-03-03T12:49:07.682647581Z" level=info msg="StartContainer for \"dafe509028985f67fd9fbdaadd9ae2ee687b389b83167c1615417ab53e99d3e0\" returns successfully"
Mar 3 12:49:07.687112 containerd[2012]: time="2026-03-03T12:49:07.686865293Z" level=info msg="received container exit event container_id:\"dafe509028985f67fd9fbdaadd9ae2ee687b389b83167c1615417ab53e99d3e0\" id:\"dafe509028985f67fd9fbdaadd9ae2ee687b389b83167c1615417ab53e99d3e0\" pid:5458 exited_at:{seconds:1772542147 nanos:682690013}"
Mar 3 12:49:07.730719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dafe509028985f67fd9fbdaadd9ae2ee687b389b83167c1615417ab53e99d3e0-rootfs.mount: Deactivated successfully.
Mar 3 12:49:08.427918 kubelet[3341]: I0303 12:49:08.425354 3341 setters.go:543] "Node became not ready" node="ip-172-31-20-143" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-03T12:49:08Z","lastTransitionTime":"2026-03-03T12:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 3 12:49:08.479791 containerd[2012]: time="2026-03-03T12:49:08.478656557Z" level=info msg="CreateContainer within sandbox \"35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 3 12:49:08.500737 containerd[2012]: time="2026-03-03T12:49:08.500686277Z" level=info msg="Container e776684be2a6afc621926485aa67252fa047154f93dcd687bc08636803877d36: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:49:08.521854 containerd[2012]: time="2026-03-03T12:49:08.521792082Z" level=info msg="CreateContainer within sandbox \"35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e776684be2a6afc621926485aa67252fa047154f93dcd687bc08636803877d36\""
Mar 3 12:49:08.522842 containerd[2012]: time="2026-03-03T12:49:08.522753174Z" level=info msg="StartContainer for \"e776684be2a6afc621926485aa67252fa047154f93dcd687bc08636803877d36\""
Mar 3 12:49:08.524485 containerd[2012]: time="2026-03-03T12:49:08.524354346Z" level=info msg="connecting to shim e776684be2a6afc621926485aa67252fa047154f93dcd687bc08636803877d36" address="unix:///run/containerd/s/cd489d9f1a7e149c9ba1efe567c3a561ab4776ca7b66f96176c6c09b5fce7a92" protocol=ttrpc version=3
Mar 3 12:49:08.587077 systemd[1]: Started cri-containerd-e776684be2a6afc621926485aa67252fa047154f93dcd687bc08636803877d36.scope - libcontainer container e776684be2a6afc621926485aa67252fa047154f93dcd687bc08636803877d36.
Mar 3 12:49:08.743081 containerd[2012]: time="2026-03-03T12:49:08.741971431Z" level=info msg="StartContainer for \"e776684be2a6afc621926485aa67252fa047154f93dcd687bc08636803877d36\" returns successfully"
Mar 3 12:49:08.746127 systemd[1]: cri-containerd-e776684be2a6afc621926485aa67252fa047154f93dcd687bc08636803877d36.scope: Deactivated successfully.
Mar 3 12:49:08.749638 containerd[2012]: time="2026-03-03T12:49:08.749089291Z" level=info msg="received container exit event container_id:\"e776684be2a6afc621926485aa67252fa047154f93dcd687bc08636803877d36\" id:\"e776684be2a6afc621926485aa67252fa047154f93dcd687bc08636803877d36\" pid:5498 exited_at:{seconds:1772542148 nanos:748640359}"
Mar 3 12:49:08.769509 kubelet[3341]: E0303 12:49:08.769087 3341 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-8zwsw" podUID="691fdfdd-4298-40f1-8930-e6ae4b47c7af"
Mar 3 12:49:08.812422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e776684be2a6afc621926485aa67252fa047154f93dcd687bc08636803877d36-rootfs.mount: Deactivated successfully.
Mar 3 12:49:09.496709 containerd[2012]: time="2026-03-03T12:49:09.496659966Z" level=info msg="CreateContainer within sandbox \"35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 3 12:49:09.521434 containerd[2012]: time="2026-03-03T12:49:09.520514743Z" level=info msg="Container 5cc5fe9f903d4a6308df02bec47f9afea2abcd01b0ef5b8ac9c3909d26aaf909: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:49:09.543834 containerd[2012]: time="2026-03-03T12:49:09.543750511Z" level=info msg="CreateContainer within sandbox \"35e823292de535dde6024e02fe137a040d67c1ef16fed47e04c977986af3ca19\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5cc5fe9f903d4a6308df02bec47f9afea2abcd01b0ef5b8ac9c3909d26aaf909\""
Mar 3 12:49:09.545267 containerd[2012]: time="2026-03-03T12:49:09.545008927Z" level=info msg="StartContainer for \"5cc5fe9f903d4a6308df02bec47f9afea2abcd01b0ef5b8ac9c3909d26aaf909\""
Mar 3 12:49:09.547014 containerd[2012]: time="2026-03-03T12:49:09.546956623Z" level=info msg="connecting to shim 5cc5fe9f903d4a6308df02bec47f9afea2abcd01b0ef5b8ac9c3909d26aaf909" address="unix:///run/containerd/s/cd489d9f1a7e149c9ba1efe567c3a561ab4776ca7b66f96176c6c09b5fce7a92" protocol=ttrpc version=3
Mar 3 12:49:09.590091 systemd[1]: Started cri-containerd-5cc5fe9f903d4a6308df02bec47f9afea2abcd01b0ef5b8ac9c3909d26aaf909.scope - libcontainer container 5cc5fe9f903d4a6308df02bec47f9afea2abcd01b0ef5b8ac9c3909d26aaf909.
Mar 3 12:49:09.672074 containerd[2012]: time="2026-03-03T12:49:09.672009799Z" level=info msg="StartContainer for \"5cc5fe9f903d4a6308df02bec47f9afea2abcd01b0ef5b8ac9c3909d26aaf909\" returns successfully"
Mar 3 12:49:10.464814 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 3 12:49:10.562792 kubelet[3341]: I0303 12:49:10.562690 3341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-24sqq" podStartSLOduration=5.562670372 podStartE2EDuration="5.562670372s" podCreationTimestamp="2026-03-03 12:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:49:10.562366796 +0000 UTC m=+125.155409867" watchObservedRunningTime="2026-03-03 12:49:10.562670372 +0000 UTC m=+125.155713419"
Mar 3 12:49:10.768678 kubelet[3341]: E0303 12:49:10.768512 3341 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-8zwsw" podUID="691fdfdd-4298-40f1-8930-e6ae4b47c7af"
Mar 3 12:49:13.583545 kubelet[3341]: E0303 12:49:13.583477 3341 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45382->127.0.0.1:38535: write tcp 127.0.0.1:45382->127.0.0.1:38535: write: broken pipe
Mar 3 12:49:14.704955 systemd-networkd[1859]: lxc_health: Link UP
Mar 3 12:49:14.716549 (udev-worker)[6080]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:49:14.728111 systemd-networkd[1859]: lxc_health: Gained carrier
Mar 3 12:49:16.533195 systemd-networkd[1859]: lxc_health: Gained IPv6LL
Mar 3 12:49:18.677555 ntpd[2197]: Listen normally on 13 lxc_health [fe80::80ee:33ff:feea:ceb7%14]:123
Mar 3 12:49:18.678097 ntpd[2197]: 3 Mar 12:49:18 ntpd[2197]: Listen normally on 13 lxc_health [fe80::80ee:33ff:feea:ceb7%14]:123
Mar 3 12:49:20.619118 sshd[5437]: Connection closed by 20.161.92.111 port 57802
Mar 3 12:49:20.618162 sshd-session[5389]: pam_unix(sshd:session): session closed for user core
Mar 3 12:49:20.628061 systemd-logind[1982]: Session 28 logged out. Waiting for processes to exit.
Mar 3 12:49:20.630494 systemd[1]: sshd@27-172.31.20.143:22-20.161.92.111:57802.service: Deactivated successfully.
Mar 3 12:49:20.637393 systemd[1]: session-28.scope: Deactivated successfully.
Mar 3 12:49:20.643537 systemd-logind[1982]: Removed session 28.
Mar 3 12:49:35.523206 systemd[1]: cri-containerd-46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048.scope: Deactivated successfully.
Mar 3 12:49:35.525972 systemd[1]: cri-containerd-46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048.scope: Consumed 4.653s CPU time, 56.7M memory peak.
Mar 3 12:49:35.529822 containerd[2012]: time="2026-03-03T12:49:35.527744228Z" level=info msg="received container exit event container_id:\"46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048\" id:\"46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048\" pid:3172 exit_status:1 exited_at:{seconds:1772542175 nanos:526742240}"
Mar 3 12:49:35.576790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048-rootfs.mount: Deactivated successfully.
Mar 3 12:49:36.605475 kubelet[3341]: I0303 12:49:36.605430 3341 scope.go:117] "RemoveContainer" containerID="46fbc0ef35a6dca18e4d7dab311145fd05d5766e9ad7737a83ba1b925993e048"
Mar 3 12:49:36.610751 containerd[2012]: time="2026-03-03T12:49:36.610172889Z" level=info msg="CreateContainer within sandbox \"0b07d315fc02d7c93c57a51237843b01a04ad53e10962981679222a00269bd2f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 3 12:49:36.627810 containerd[2012]: time="2026-03-03T12:49:36.627002781Z" level=info msg="Container 9a2de284d9bfb755a31a36fcc3709ae74f76c5ec0568b912ac3b543fda0bb19a: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:49:36.645106 containerd[2012]: time="2026-03-03T12:49:36.645023673Z" level=info msg="CreateContainer within sandbox \"0b07d315fc02d7c93c57a51237843b01a04ad53e10962981679222a00269bd2f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9a2de284d9bfb755a31a36fcc3709ae74f76c5ec0568b912ac3b543fda0bb19a\""
Mar 3 12:49:36.647804 containerd[2012]: time="2026-03-03T12:49:36.646017657Z" level=info msg="StartContainer for \"9a2de284d9bfb755a31a36fcc3709ae74f76c5ec0568b912ac3b543fda0bb19a\""
Mar 3 12:49:36.648370 containerd[2012]: time="2026-03-03T12:49:36.648325137Z" level=info msg="connecting to shim 9a2de284d9bfb755a31a36fcc3709ae74f76c5ec0568b912ac3b543fda0bb19a" address="unix:///run/containerd/s/756a1dbf197157e924139a4fac44bf526f6378e359b6ad7ae85e90ae098b9b24" protocol=ttrpc version=3
Mar 3 12:49:36.692080 systemd[1]: Started cri-containerd-9a2de284d9bfb755a31a36fcc3709ae74f76c5ec0568b912ac3b543fda0bb19a.scope - libcontainer container 9a2de284d9bfb755a31a36fcc3709ae74f76c5ec0568b912ac3b543fda0bb19a.
Mar 3 12:49:36.800113 containerd[2012]: time="2026-03-03T12:49:36.800016454Z" level=info msg="StartContainer for \"9a2de284d9bfb755a31a36fcc3709ae74f76c5ec0568b912ac3b543fda0bb19a\" returns successfully"
Mar 3 12:49:39.294104 kubelet[3341]: E0303 12:49:39.293981 3341 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-143?timeout=10s\": context deadline exceeded"
Mar 3 12:49:41.062931 systemd[1]: cri-containerd-dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f.scope: Deactivated successfully.
Mar 3 12:49:41.064947 systemd[1]: cri-containerd-dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f.scope: Consumed 5.332s CPU time, 22.9M memory peak.
Mar 3 12:49:41.069155 containerd[2012]: time="2026-03-03T12:49:41.068940731Z" level=info msg="received container exit event container_id:\"dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f\" id:\"dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f\" pid:3184 exit_status:1 exited_at:{seconds:1772542181 nanos:68428163}"
Mar 3 12:49:41.113234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f-rootfs.mount: Deactivated successfully.
Mar 3 12:49:41.631812 kubelet[3341]: I0303 12:49:41.630859 3341 scope.go:117] "RemoveContainer" containerID="dfde34cc4448c87f50bf866e064351fc46157119c15d59f995db2c00db01564f"
Mar 3 12:49:41.637798 containerd[2012]: time="2026-03-03T12:49:41.637715510Z" level=info msg="CreateContainer within sandbox \"8f44db8f69e37912ab021ab208a1a3c8adfbbb49207bdc63cd24ddbfb0c52637\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 3 12:49:41.655622 containerd[2012]: time="2026-03-03T12:49:41.654061406Z" level=info msg="Container 021ac54c02b2f17d6ecd6f4d3143e3db23cf069ef81bdac2cd441ad5891ae0fa: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:49:41.675039 containerd[2012]: time="2026-03-03T12:49:41.674928302Z" level=info msg="CreateContainer within sandbox \"8f44db8f69e37912ab021ab208a1a3c8adfbbb49207bdc63cd24ddbfb0c52637\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"021ac54c02b2f17d6ecd6f4d3143e3db23cf069ef81bdac2cd441ad5891ae0fa\""
Mar 3 12:49:41.675653 containerd[2012]: time="2026-03-03T12:49:41.675600338Z" level=info msg="StartContainer for \"021ac54c02b2f17d6ecd6f4d3143e3db23cf069ef81bdac2cd441ad5891ae0fa\""
Mar 3 12:49:41.677861 containerd[2012]: time="2026-03-03T12:49:41.677806190Z" level=info msg="connecting to shim 021ac54c02b2f17d6ecd6f4d3143e3db23cf069ef81bdac2cd441ad5891ae0fa" address="unix:///run/containerd/s/ca4f79ba49c204e10bcd50f1bbed2fac08c07da28a568bb70e33e9c4a7c32992" protocol=ttrpc version=3
Mar 3 12:49:41.722088 systemd[1]: Started cri-containerd-021ac54c02b2f17d6ecd6f4d3143e3db23cf069ef81bdac2cd441ad5891ae0fa.scope - libcontainer container 021ac54c02b2f17d6ecd6f4d3143e3db23cf069ef81bdac2cd441ad5891ae0fa.
Mar 3 12:49:41.805842 containerd[2012]: time="2026-03-03T12:49:41.805751307Z" level=info msg="StartContainer for \"021ac54c02b2f17d6ecd6f4d3143e3db23cf069ef81bdac2cd441ad5891ae0fa\" returns successfully"
Mar 3 12:49:49.294800 kubelet[3341]: E0303 12:49:49.294665 3341 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-143?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"