Mar 3 12:46:01.129262 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 3 12:46:01.129306 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Mar 3 11:03:33 -00 2026
Mar 3 12:46:01.129329 kernel: KASLR disabled due to lack of seed
Mar 3 12:46:01.129345 kernel: efi: EFI v2.7 by EDK II
Mar 3 12:46:01.129361 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78551598
Mar 3 12:46:01.129376 kernel: secureboot: Secure boot disabled
Mar 3 12:46:01.129394 kernel: ACPI: Early table checksum verification disabled
Mar 3 12:46:01.129408 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 3 12:46:01.129423 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 3 12:46:01.129439 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 3 12:46:01.129489 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 3 12:46:01.129516 kernel: ACPI: FACS 0x0000000078630000 000040
Mar 3 12:46:01.129532 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 3 12:46:01.129548 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 3 12:46:01.129566 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 3 12:46:01.129582 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 3 12:46:01.129602 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 3 12:46:01.129619 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 3 12:46:01.129635 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 3 12:46:01.129651 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 3 12:46:01.129667 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 3 12:46:01.129682 kernel: printk: legacy bootconsole [uart0] enabled
Mar 3 12:46:01.129698 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 3 12:46:01.129714 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 3 12:46:01.129730 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Mar 3 12:46:01.129745 kernel: Zone ranges:
Mar 3 12:46:01.129761 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 3 12:46:01.129781 kernel: DMA32 empty
Mar 3 12:46:01.129797 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 3 12:46:01.129812 kernel: Device empty
Mar 3 12:46:01.129827 kernel: Movable zone start for each node
Mar 3 12:46:01.129843 kernel: Early memory node ranges
Mar 3 12:46:01.129859 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 3 12:46:01.129874 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 3 12:46:01.129890 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 3 12:46:01.129905 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 3 12:46:01.129921 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 3 12:46:01.129936 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 3 12:46:01.129952 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 3 12:46:01.129972 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 3 12:46:01.129994 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 3 12:46:01.130011 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 3 12:46:01.130028 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Mar 3 12:46:01.130044 kernel: psci: probing for conduit method from ACPI.
Mar 3 12:46:01.130065 kernel: psci: PSCIv1.0 detected in firmware.
Mar 3 12:46:01.130081 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 3 12:46:01.130097 kernel: psci: Trusted OS migration not required
Mar 3 12:46:01.130113 kernel: psci: SMC Calling Convention v1.1
Mar 3 12:46:01.130130 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 3 12:46:01.130146 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 3 12:46:01.130163 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 3 12:46:01.130190 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 3 12:46:01.130214 kernel: Detected PIPT I-cache on CPU0
Mar 3 12:46:01.130232 kernel: CPU features: detected: GIC system register CPU interface
Mar 3 12:46:01.130250 kernel: CPU features: detected: Spectre-v2
Mar 3 12:46:01.130272 kernel: CPU features: detected: Spectre-v3a
Mar 3 12:46:01.130289 kernel: CPU features: detected: Spectre-BHB
Mar 3 12:46:01.130306 kernel: CPU features: detected: ARM erratum 1742098
Mar 3 12:46:01.130330 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 3 12:46:01.130353 kernel: alternatives: applying boot alternatives
Mar 3 12:46:01.130371 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9550c2083f3062ad7c57f28a015a3afab95dfddb073076612b771af8d5df9e06
Mar 3 12:46:01.130389 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 3 12:46:01.130405 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 3 12:46:01.130422 kernel: Fallback order for Node 0: 0
Mar 3 12:46:01.130439 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Mar 3 12:46:01.130490 kernel: Policy zone: Normal
Mar 3 12:46:01.130513 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 3 12:46:01.130529 kernel: software IO TLB: area num 2.
Mar 3 12:46:01.130546 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Mar 3 12:46:01.130562 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 3 12:46:01.130579 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 3 12:46:01.130605 kernel: rcu: RCU event tracing is enabled.
Mar 3 12:46:01.130627 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 3 12:46:01.130644 kernel: Trampoline variant of Tasks RCU enabled.
Mar 3 12:46:01.130661 kernel: Tracing variant of Tasks RCU enabled.
Mar 3 12:46:01.130678 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 3 12:46:01.130694 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 3 12:46:01.130716 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 3 12:46:01.130733 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 3 12:46:01.130750 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 3 12:46:01.130766 kernel: GICv3: 96 SPIs implemented
Mar 3 12:46:01.130782 kernel: GICv3: 0 Extended SPIs implemented
Mar 3 12:46:01.130798 kernel: Root IRQ handler: gic_handle_irq
Mar 3 12:46:01.130814 kernel: GICv3: GICv3 features: 16 PPIs
Mar 3 12:46:01.130830 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Mar 3 12:46:01.130847 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 3 12:46:01.130864 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 3 12:46:01.130880 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Mar 3 12:46:01.130898 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Mar 3 12:46:01.130919 kernel: GICv3: using LPI property table @0x0000000400110000
Mar 3 12:46:01.130935 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 3 12:46:01.130951 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Mar 3 12:46:01.130968 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 3 12:46:01.130984 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 3 12:46:01.131001 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 3 12:46:01.131018 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 3 12:46:01.131034 kernel: Console: colour dummy device 80x25
Mar 3 12:46:01.131051 kernel: printk: legacy console [tty1] enabled
Mar 3 12:46:01.131069 kernel: ACPI: Core revision 20240827
Mar 3 12:46:01.131086 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 3 12:46:01.131107 kernel: pid_max: default: 32768 minimum: 301
Mar 3 12:46:01.131124 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 3 12:46:01.131140 kernel: landlock: Up and running.
Mar 3 12:46:01.131157 kernel: SELinux: Initializing.
Mar 3 12:46:01.131174 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 12:46:01.131191 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 12:46:01.131208 kernel: rcu: Hierarchical SRCU implementation.
Mar 3 12:46:01.131225 kernel: rcu: Max phase no-delay instances is 400.
Mar 3 12:46:01.131246 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 3 12:46:01.131264 kernel: Remapping and enabling EFI services.
Mar 3 12:46:01.131281 kernel: smp: Bringing up secondary CPUs ...
Mar 3 12:46:01.131298 kernel: Detected PIPT I-cache on CPU1
Mar 3 12:46:01.132858 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 3 12:46:01.132896 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Mar 3 12:46:01.132917 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 3 12:46:01.132935 kernel: smp: Brought up 1 node, 2 CPUs
Mar 3 12:46:01.132952 kernel: SMP: Total of 2 processors activated.
Mar 3 12:46:01.135536 kernel: CPU: All CPU(s) started at EL1
Mar 3 12:46:01.135566 kernel: CPU features: detected: 32-bit EL0 Support
Mar 3 12:46:01.135584 kernel: CPU features: detected: 32-bit EL1 Support
Mar 3 12:46:01.135606 kernel: CPU features: detected: CRC32 instructions
Mar 3 12:46:01.135625 kernel: alternatives: applying system-wide alternatives
Mar 3 12:46:01.135645 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Mar 3 12:46:01.135663 kernel: devtmpfs: initialized
Mar 3 12:46:01.135681 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 3 12:46:01.135704 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 3 12:46:01.135721 kernel: 16880 pages in range for non-PLT usage
Mar 3 12:46:01.135739 kernel: 508400 pages in range for PLT usage
Mar 3 12:46:01.135757 kernel: pinctrl core: initialized pinctrl subsystem
Mar 3 12:46:01.135775 kernel: SMBIOS 3.0.0 present.
Mar 3 12:46:01.135793 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 3 12:46:01.135811 kernel: DMI: Memory slots populated: 0/0
Mar 3 12:46:01.135828 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 3 12:46:01.135847 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 3 12:46:01.135869 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 3 12:46:01.135887 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 3 12:46:01.135905 kernel: audit: initializing netlink subsys (disabled)
Mar 3 12:46:01.135923 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1
Mar 3 12:46:01.135941 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 3 12:46:01.135958 kernel: cpuidle: using governor menu
Mar 3 12:46:01.135976 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 3 12:46:01.135994 kernel: ASID allocator initialised with 65536 entries
Mar 3 12:46:01.136012 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 3 12:46:01.136033 kernel: Serial: AMBA PL011 UART driver
Mar 3 12:46:01.136051 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 3 12:46:01.136068 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 3 12:46:01.136086 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 3 12:46:01.136104 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 3 12:46:01.136122 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 3 12:46:01.136139 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 3 12:46:01.136157 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 3 12:46:01.136175 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 3 12:46:01.136197 kernel: ACPI: Added _OSI(Module Device)
Mar 3 12:46:01.136215 kernel: ACPI: Added _OSI(Processor Device)
Mar 3 12:46:01.136232 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 3 12:46:01.136250 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 3 12:46:01.136268 kernel: ACPI: Interpreter enabled
Mar 3 12:46:01.136285 kernel: ACPI: Using GIC for interrupt routing
Mar 3 12:46:01.136303 kernel: ACPI: MCFG table detected, 1 entries
Mar 3 12:46:01.136320 kernel: ACPI: CPU0 has been hot-added
Mar 3 12:46:01.136338 kernel: ACPI: CPU1 has been hot-added
Mar 3 12:46:01.136360 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 3 12:46:01.136710 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 3 12:46:01.136928 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 3 12:46:01.137115 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 3 12:46:01.137329 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 3 12:46:01.137552 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 3 12:46:01.137579 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 3 12:46:01.137606 kernel: acpiphp: Slot [1] registered
Mar 3 12:46:01.137624 kernel: acpiphp: Slot [2] registered
Mar 3 12:46:01.137642 kernel: acpiphp: Slot [3] registered
Mar 3 12:46:01.137659 kernel: acpiphp: Slot [4] registered
Mar 3 12:46:01.137677 kernel: acpiphp: Slot [5] registered
Mar 3 12:46:01.137694 kernel: acpiphp: Slot [6] registered
Mar 3 12:46:01.137712 kernel: acpiphp: Slot [7] registered
Mar 3 12:46:01.137730 kernel: acpiphp: Slot [8] registered
Mar 3 12:46:01.137748 kernel: acpiphp: Slot [9] registered
Mar 3 12:46:01.137766 kernel: acpiphp: Slot [10] registered
Mar 3 12:46:01.137787 kernel: acpiphp: Slot [11] registered
Mar 3 12:46:01.137805 kernel: acpiphp: Slot [12] registered
Mar 3 12:46:01.137822 kernel: acpiphp: Slot [13] registered
Mar 3 12:46:01.137840 kernel: acpiphp: Slot [14] registered
Mar 3 12:46:01.137857 kernel: acpiphp: Slot [15] registered
Mar 3 12:46:01.137874 kernel: acpiphp: Slot [16] registered
Mar 3 12:46:01.137892 kernel: acpiphp: Slot [17] registered
Mar 3 12:46:01.137909 kernel: acpiphp: Slot [18] registered
Mar 3 12:46:01.137927 kernel: acpiphp: Slot [19] registered
Mar 3 12:46:01.137948 kernel: acpiphp: Slot [20] registered
Mar 3 12:46:01.137965 kernel: acpiphp: Slot [21] registered
Mar 3 12:46:01.137983 kernel: acpiphp: Slot [22] registered
Mar 3 12:46:01.138000 kernel: acpiphp: Slot [23] registered
Mar 3 12:46:01.138018 kernel: acpiphp: Slot [24] registered
Mar 3 12:46:01.138035 kernel: acpiphp: Slot [25] registered
Mar 3 12:46:01.138053 kernel: acpiphp: Slot [26] registered
Mar 3 12:46:01.138070 kernel: acpiphp: Slot [27] registered
Mar 3 12:46:01.138088 kernel: acpiphp: Slot [28] registered
Mar 3 12:46:01.138105 kernel: acpiphp: Slot [29] registered
Mar 3 12:46:01.138127 kernel: acpiphp: Slot [30] registered
Mar 3 12:46:01.138144 kernel: acpiphp: Slot [31] registered
Mar 3 12:46:01.138162 kernel: PCI host bridge to bus 0000:00
Mar 3 12:46:01.138350 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 3 12:46:01.140015 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 3 12:46:01.140206 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 3 12:46:01.140375 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 3 12:46:01.140677 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Mar 3 12:46:01.140918 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Mar 3 12:46:01.141113 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Mar 3 12:46:01.141358 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Mar 3 12:46:01.141632 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Mar 3 12:46:01.141853 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 3 12:46:01.142073 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Mar 3 12:46:01.142271 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Mar 3 12:46:01.142499 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Mar 3 12:46:01.145357 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Mar 3 12:46:01.145635 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 3 12:46:01.145815 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 3 12:46:01.145984 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 3 12:46:01.146224 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 3 12:46:01.146252 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 3 12:46:01.146271 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 3 12:46:01.146289 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 3 12:46:01.146307 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 3 12:46:01.146325 kernel: iommu: Default domain type: Translated
Mar 3 12:46:01.146342 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 3 12:46:01.146360 kernel: efivars: Registered efivars operations
Mar 3 12:46:01.146377 kernel: vgaarb: loaded
Mar 3 12:46:01.146401 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 3 12:46:01.146419 kernel: VFS: Disk quotas dquot_6.6.0
Mar 3 12:46:01.146436 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 3 12:46:01.146493 kernel: pnp: PnP ACPI init
Mar 3 12:46:01.146706 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 3 12:46:01.146743 kernel: pnp: PnP ACPI: found 1 devices
Mar 3 12:46:01.146766 kernel: NET: Registered PF_INET protocol family
Mar 3 12:46:01.146784 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 3 12:46:01.146808 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 3 12:46:01.146826 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 3 12:46:01.146844 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 3 12:46:01.146862 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 3 12:46:01.146880 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 3 12:46:01.146898 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 12:46:01.146916 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 12:46:01.146934 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 3 12:46:01.146951 kernel: PCI: CLS 0 bytes, default 64
Mar 3 12:46:01.146973 kernel: kvm [1]: HYP mode not available
Mar 3 12:46:01.146991 kernel: Initialise system trusted keyrings
Mar 3 12:46:01.147008 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 3 12:46:01.147025 kernel: Key type asymmetric registered
Mar 3 12:46:01.147044 kernel: Asymmetric key parser 'x509' registered
Mar 3 12:46:01.147061 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 3 12:46:01.147079 kernel: io scheduler mq-deadline registered
Mar 3 12:46:01.147097 kernel: io scheduler kyber registered
Mar 3 12:46:01.147115 kernel: io scheduler bfq registered
Mar 3 12:46:01.147313 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 3 12:46:01.147339 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 3 12:46:01.147358 kernel: ACPI: button: Power Button [PWRB]
Mar 3 12:46:01.147376 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 3 12:46:01.147394 kernel: ACPI: button: Sleep Button [SLPB]
Mar 3 12:46:01.147411 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 3 12:46:01.147430 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 3 12:46:01.147644 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 3 12:46:01.147675 kernel: printk: legacy console [ttyS0] disabled
Mar 3 12:46:01.147694 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 3 12:46:01.147712 kernel: printk: legacy console [ttyS0] enabled
Mar 3 12:46:01.147730 kernel: printk: legacy bootconsole [uart0] disabled
Mar 3 12:46:01.147748 kernel: thunder_xcv, ver 1.0
Mar 3 12:46:01.147766 kernel: thunder_bgx, ver 1.0
Mar 3 12:46:01.147783 kernel: nicpf, ver 1.0
Mar 3 12:46:01.147801 kernel: nicvf, ver 1.0
Mar 3 12:46:01.148002 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 3 12:46:01.148182 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-03T12:46:00 UTC (1772541960)
Mar 3 12:46:01.148206 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 3 12:46:01.148224 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Mar 3 12:46:01.148242 kernel: NET: Registered PF_INET6 protocol family
Mar 3 12:46:01.148259 kernel: watchdog: NMI not fully supported
Mar 3 12:46:01.148277 kernel: watchdog: Hard watchdog permanently disabled
Mar 3 12:46:01.148294 kernel: Segment Routing with IPv6
Mar 3 12:46:01.148312 kernel: In-situ OAM (IOAM) with IPv6
Mar 3 12:46:01.148330 kernel: NET: Registered PF_PACKET protocol family
Mar 3 12:46:01.148352 kernel: Key type dns_resolver registered
Mar 3 12:46:01.148370 kernel: registered taskstats version 1
Mar 3 12:46:01.148387 kernel: Loading compiled-in X.509 certificates
Mar 3 12:46:01.148405 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 14a741e1e2b172e51b42fe87d143cf4cae2ad92c'
Mar 3 12:46:01.148422 kernel: Demotion targets for Node 0: null
Mar 3 12:46:01.148440 kernel: Key type .fscrypt registered
Mar 3 12:46:01.148482 kernel: Key type fscrypt-provisioning registered
Mar 3 12:46:01.148500 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 3 12:46:01.148518 kernel: ima: Allocated hash algorithm: sha1
Mar 3 12:46:01.148541 kernel: ima: No architecture policies found
Mar 3 12:46:01.148559 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 3 12:46:01.148576 kernel: clk: Disabling unused clocks
Mar 3 12:46:01.148594 kernel: PM: genpd: Disabling unused power domains
Mar 3 12:46:01.148611 kernel: Warning: unable to open an initial console.
Mar 3 12:46:01.148629 kernel: Freeing unused kernel memory: 39552K
Mar 3 12:46:01.148647 kernel: Run /init as init process
Mar 3 12:46:01.148664 kernel: with arguments:
Mar 3 12:46:01.148681 kernel: /init
Mar 3 12:46:01.148702 kernel: with environment:
Mar 3 12:46:01.148719 kernel: HOME=/
Mar 3 12:46:01.148737 kernel: TERM=linux
Mar 3 12:46:01.148756 systemd[1]: Successfully made /usr/ read-only.
Mar 3 12:46:01.148780 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 12:46:01.148800 systemd[1]: Detected virtualization amazon.
Mar 3 12:46:01.148818 systemd[1]: Detected architecture arm64.
Mar 3 12:46:01.148840 systemd[1]: Running in initrd.
Mar 3 12:46:01.148859 systemd[1]: No hostname configured, using default hostname.
Mar 3 12:46:01.148879 systemd[1]: Hostname set to .
Mar 3 12:46:01.148897 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 12:46:01.148916 systemd[1]: Queued start job for default target initrd.target.
Mar 3 12:46:01.148934 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 12:46:01.148953 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 12:46:01.148973 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 3 12:46:01.148997 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 12:46:01.149016 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 3 12:46:01.149037 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 3 12:46:01.149058 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 3 12:46:01.149077 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 3 12:46:01.149096 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 12:46:01.149115 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 12:46:01.149138 systemd[1]: Reached target paths.target - Path Units.
Mar 3 12:46:01.149157 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 12:46:01.149176 systemd[1]: Reached target swap.target - Swaps.
Mar 3 12:46:01.149195 systemd[1]: Reached target timers.target - Timer Units.
Mar 3 12:46:01.149231 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 12:46:01.149253 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 12:46:01.149273 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 3 12:46:01.149292 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 3 12:46:01.149312 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 12:46:01.149336 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 12:46:01.149356 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 12:46:01.149375 systemd[1]: Reached target sockets.target - Socket Units.
Mar 3 12:46:01.149394 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 3 12:46:01.149413 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 12:46:01.149432 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 3 12:46:01.149479 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 3 12:46:01.149502 systemd[1]: Starting systemd-fsck-usr.service...
Mar 3 12:46:01.149528 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 12:46:01.149549 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 12:46:01.149569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 12:46:01.149589 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 3 12:46:01.149610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 12:46:01.149635 systemd[1]: Finished systemd-fsck-usr.service.
Mar 3 12:46:01.149656 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 3 12:46:01.149717 systemd-journald[258]: Collecting audit messages is disabled.
Mar 3 12:46:01.149765 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 3 12:46:01.149791 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:01.149811 kernel: Bridge firewalling registered
Mar 3 12:46:01.149831 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 3 12:46:01.149852 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 3 12:46:01.149873 systemd-journald[258]: Journal started
Mar 3 12:46:01.149910 systemd-journald[258]: Runtime Journal (/run/log/journal/ec2a2233d57fb450c4a28f0051c22716) is 8M, max 75.3M, 67.3M free.
Mar 3 12:46:01.101474 systemd-modules-load[260]: Inserted module 'overlay'
Mar 3 12:46:01.155569 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 12:46:01.134590 systemd-modules-load[260]: Inserted module 'br_netfilter'
Mar 3 12:46:01.166602 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 3 12:46:01.175065 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 12:46:01.184790 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 3 12:46:01.190318 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 3 12:46:01.228900 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 12:46:01.231222 systemd-tmpfiles[280]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 3 12:46:01.239745 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 12:46:01.250015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 12:46:01.257739 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 3 12:46:01.277808 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 12:46:01.285466 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 3 12:46:01.337339 dracut-cmdline[300]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9550c2083f3062ad7c57f28a015a3afab95dfddb073076612b771af8d5df9e06
Mar 3 12:46:01.375650 systemd-resolved[295]: Positive Trust Anchors:
Mar 3 12:46:01.375685 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 3 12:46:01.375745 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 3 12:46:01.499493 kernel: SCSI subsystem initialized
Mar 3 12:46:01.507497 kernel: Loading iSCSI transport class v2.0-870.
Mar 3 12:46:01.519495 kernel: iscsi: registered transport (tcp)
Mar 3 12:46:01.541554 kernel: iscsi: registered transport (qla4xxx)
Mar 3 12:46:01.541652 kernel: QLogic iSCSI HBA Driver
Mar 3 12:46:01.572223 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 12:46:01.614891 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 12:46:01.624691 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 12:46:01.665665 kernel: random: crng init done
Mar 3 12:46:01.665905 systemd-resolved[295]: Defaulting to hostname 'linux'.
Mar 3 12:46:01.670276 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 3 12:46:01.673246 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 3 12:46:01.714103 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 3 12:46:01.721528 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 3 12:46:01.807508 kernel: raid6: neonx8 gen() 6625 MB/s
Mar 3 12:46:01.825485 kernel: raid6: neonx4 gen() 6624 MB/s
Mar 3 12:46:01.842485 kernel: raid6: neonx2 gen() 5475 MB/s
Mar 3 12:46:01.860485 kernel: raid6: neonx1 gen() 3959 MB/s
Mar 3 12:46:01.877484 kernel: raid6: int64x8 gen() 3665 MB/s
Mar 3 12:46:01.895484 kernel: raid6: int64x4 gen() 3720 MB/s
Mar 3 12:46:01.913482 kernel: raid6: int64x2 gen() 3612 MB/s
Mar 3 12:46:01.931710 kernel: raid6: int64x1 gen() 2761 MB/s
Mar 3 12:46:01.931749 kernel: raid6: using algorithm neonx8 gen() 6625 MB/s
Mar 3 12:46:01.950798 kernel: raid6: .... xor() 4622 MB/s, rmw enabled
Mar 3 12:46:01.950837 kernel: raid6: using neon recovery algorithm
Mar 3 12:46:01.958483 kernel: xor: measuring software checksum speed
Mar 3 12:46:01.961072 kernel: 8regs : 11995 MB/sec
Mar 3 12:46:01.961102 kernel: 32regs : 13012 MB/sec
Mar 3 12:46:01.962564 kernel: arm64_neon : 8809 MB/sec
Mar 3 12:46:01.962597 kernel: xor: using function: 32regs (13012 MB/sec)
Mar 3 12:46:02.055494 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 3 12:46:02.067264 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 12:46:02.078390 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 12:46:02.123273 systemd-udevd[508]: Using default interface naming scheme 'v255'.
Mar 3 12:46:02.133199 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 12:46:02.150620 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 3 12:46:02.189015 dracut-pre-trigger[519]: rd.md=0: removing MD RAID activation
Mar 3 12:46:02.233258 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 12:46:02.240291 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 12:46:02.368976 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 12:46:02.375742 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 3 12:46:02.543069 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 3 12:46:02.543139 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 3 12:46:02.543433 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 3 12:46:02.545881 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 3 12:46:02.546631 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 12:46:02.562482 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 3 12:46:02.562804 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 3 12:46:02.563096 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 3 12:46:02.547018 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:02.552246 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 12:46:02.559294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 12:46:02.571104 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 3 12:46:02.582862 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 3 12:46:02.582897 kernel: GPT:9289727 != 33554431
Mar 3 12:46:02.582920 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 3 12:46:02.583750 kernel: GPT:9289727 != 33554431
Mar 3 12:46:02.585951 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 3 12:46:02.586004 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:14:77:dd:90:11
Mar 3 12:46:02.588882 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 3 12:46:02.594682 (udev-worker)[559]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:46:02.633573 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:02.650497 kernel: nvme nvme0: using unchecked data buffer
Mar 3 12:46:02.755378 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 3 12:46:02.820087 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 3 12:46:02.823062 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 3 12:46:02.872327 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 3 12:46:02.876408 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 3 12:46:02.918395 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 3 12:46:02.918849 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 12:46:02.919493 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 12:46:02.920287 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 12:46:02.923638 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 3 12:46:02.931943 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 3 12:46:02.975841 disk-uuid[689]: Primary Header is updated.
Mar 3 12:46:02.975841 disk-uuid[689]: Secondary Entries is updated.
Mar 3 12:46:02.975841 disk-uuid[689]: Secondary Header is updated.
Mar 3 12:46:02.990585 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 12:46:02.999483 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 3 12:46:03.007478 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 3 12:46:04.013964 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 3 12:46:04.014040 disk-uuid[693]: The operation has completed successfully.
Mar 3 12:46:04.196902 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 3 12:46:04.197511 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 3 12:46:04.287181 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 3 12:46:04.309058 sh[955]: Success
Mar 3 12:46:04.334581 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 3 12:46:04.334695 kernel: device-mapper: uevent: version 1.0.3
Mar 3 12:46:04.336731 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 3 12:46:04.349508 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Mar 3 12:46:04.446501 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 3 12:46:04.452822 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 3 12:46:04.474256 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 3 12:46:04.499483 kernel: BTRFS: device fsid 639fb782-fb4f-4fdd-a572-72667a093996 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (990)
Mar 3 12:46:04.503831 kernel: BTRFS info (device dm-0): first mount of filesystem 639fb782-fb4f-4fdd-a572-72667a093996
Mar 3 12:46:04.503884 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 3 12:46:04.593613 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 3 12:46:04.593688 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 3 12:46:04.593714 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 3 12:46:04.614107 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 3 12:46:04.617056 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 12:46:04.623166 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 3 12:46:04.624539 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 3 12:46:04.655321 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 3 12:46:04.697539 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1015)
Mar 3 12:46:04.702522 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:04.702583 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 3 12:46:04.732271 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 3 12:46:04.732343 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 3 12:46:04.742074 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:04.745090 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 3 12:46:04.750618 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 3 12:46:04.836468 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 12:46:04.850364 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 3 12:46:04.922265 systemd-networkd[1159]: lo: Link UP
Mar 3 12:46:04.922279 systemd-networkd[1159]: lo: Gained carrier
Mar 3 12:46:04.925548 systemd-networkd[1159]: Enumeration completed
Mar 3 12:46:04.926263 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 3 12:46:04.926808 systemd-networkd[1159]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 12:46:04.926815 systemd-networkd[1159]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 3 12:46:04.930816 systemd[1]: Reached target network.target - Network.
Mar 3 12:46:04.944392 systemd-networkd[1159]: eth0: Link UP
Mar 3 12:46:04.944399 systemd-networkd[1159]: eth0: Gained carrier
Mar 3 12:46:04.944421 systemd-networkd[1159]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 12:46:04.969525 systemd-networkd[1159]: eth0: DHCPv4 address 172.31.17.163/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 3 12:46:05.343053 ignition[1089]: Ignition 2.22.0
Mar 3 12:46:05.343083 ignition[1089]: Stage: fetch-offline
Mar 3 12:46:05.346709 ignition[1089]: no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:05.346745 ignition[1089]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:05.348964 ignition[1089]: Ignition finished successfully
Mar 3 12:46:05.352838 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 12:46:05.362233 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 3 12:46:05.416174 ignition[1170]: Ignition 2.22.0
Mar 3 12:46:05.416507 ignition[1170]: Stage: fetch
Mar 3 12:46:05.416994 ignition[1170]: no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:05.417017 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:05.417351 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:05.435537 ignition[1170]: PUT result: OK
Mar 3 12:46:05.438855 ignition[1170]: parsed url from cmdline: ""
Mar 3 12:46:05.438973 ignition[1170]: no config URL provided
Mar 3 12:46:05.438993 ignition[1170]: reading system config file "/usr/lib/ignition/user.ign"
Mar 3 12:46:05.439017 ignition[1170]: no config at "/usr/lib/ignition/user.ign"
Mar 3 12:46:05.439060 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:05.443593 ignition[1170]: PUT result: OK
Mar 3 12:46:05.444226 ignition[1170]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 3 12:46:05.448294 ignition[1170]: GET result: OK
Mar 3 12:46:05.448527 ignition[1170]: parsing config with SHA512: 91bebb2bb3efa363fc9881cf95c62d1ca529d8d95c9c44a5b8a2468608ba94e5bd35dffedd698bae1aa342b7844368d8ba3326b60ca2104eb3d20fb69b900efe
Mar 3 12:46:05.469923 unknown[1170]: fetched base config from "system"
Mar 3 12:46:05.469952 unknown[1170]: fetched base config from "system"
Mar 3 12:46:05.469965 unknown[1170]: fetched user config from "aws"
Mar 3 12:46:05.474116 ignition[1170]: fetch: fetch complete
Mar 3 12:46:05.474130 ignition[1170]: fetch: fetch passed
Mar 3 12:46:05.474235 ignition[1170]: Ignition finished successfully
Mar 3 12:46:05.482542 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 3 12:46:05.489625 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 3 12:46:05.559964 ignition[1177]: Ignition 2.22.0
Mar 3 12:46:05.560525 ignition[1177]: Stage: kargs
Mar 3 12:46:05.561266 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:05.561315 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:05.561485 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:05.571029 ignition[1177]: PUT result: OK
Mar 3 12:46:05.575528 ignition[1177]: kargs: kargs passed
Mar 3 12:46:05.575842 ignition[1177]: Ignition finished successfully
Mar 3 12:46:05.584301 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 3 12:46:05.588934 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 3 12:46:05.648919 ignition[1183]: Ignition 2.22.0
Mar 3 12:46:05.649433 ignition[1183]: Stage: disks
Mar 3 12:46:05.650003 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:05.650026 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:05.650149 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:05.659676 ignition[1183]: PUT result: OK
Mar 3 12:46:05.664328 ignition[1183]: disks: disks passed
Mar 3 12:46:05.664431 ignition[1183]: Ignition finished successfully
Mar 3 12:46:05.670337 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 3 12:46:05.675135 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 3 12:46:05.680090 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 3 12:46:05.685916 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 12:46:05.688329 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 3 12:46:05.695657 systemd[1]: Reached target basic.target - Basic System.
Mar 3 12:46:05.701578 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 3 12:46:05.757979 systemd-fsck[1191]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 3 12:46:05.765939 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 3 12:46:05.772097 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 3 12:46:05.897489 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f44cfd4f-a1a9-472a-86a7-c3154f299e07 r/w with ordered data mode. Quota mode: none.
Mar 3 12:46:05.898283 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 3 12:46:05.902693 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 3 12:46:05.909531 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 3 12:46:05.917036 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 3 12:46:05.922358 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 3 12:46:05.922438 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 3 12:46:05.922506 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 12:46:05.944842 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 3 12:46:05.951644 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 3 12:46:05.967008 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1210)
Mar 3 12:46:05.971333 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:05.971372 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 3 12:46:05.981378 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 3 12:46:05.981476 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 3 12:46:05.985214 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 3 12:46:06.235846 initrd-setup-root[1234]: cut: /sysroot/etc/passwd: No such file or directory
Mar 3 12:46:06.255896 initrd-setup-root[1241]: cut: /sysroot/etc/group: No such file or directory
Mar 3 12:46:06.266142 initrd-setup-root[1248]: cut: /sysroot/etc/shadow: No such file or directory
Mar 3 12:46:06.273498 initrd-setup-root[1255]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 3 12:46:06.434619 systemd-networkd[1159]: eth0: Gained IPv6LL
Mar 3 12:46:06.611426 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 3 12:46:06.619616 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 3 12:46:06.624803 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 3 12:46:06.652564 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 3 12:46:06.657481 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:06.682483 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 3 12:46:06.708327 ignition[1324]: INFO : Ignition 2.22.0
Mar 3 12:46:06.712776 ignition[1324]: INFO : Stage: mount
Mar 3 12:46:06.712776 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:06.712776 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:06.712776 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:06.723106 ignition[1324]: INFO : PUT result: OK
Mar 3 12:46:06.726348 ignition[1324]: INFO : mount: mount passed
Mar 3 12:46:06.728176 ignition[1324]: INFO : Ignition finished successfully
Mar 3 12:46:06.729821 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 3 12:46:06.738397 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 3 12:46:06.901642 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 3 12:46:06.939488 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1334)
Mar 3 12:46:06.943877 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:06.943919 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 3 12:46:06.951380 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 3 12:46:06.951434 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 3 12:46:06.954869 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 3 12:46:07.004210 ignition[1351]: INFO : Ignition 2.22.0
Mar 3 12:46:07.006381 ignition[1351]: INFO : Stage: files
Mar 3 12:46:07.008675 ignition[1351]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:07.011045 ignition[1351]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:07.013848 ignition[1351]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:07.018085 ignition[1351]: INFO : PUT result: OK
Mar 3 12:46:07.023068 ignition[1351]: DEBUG : files: compiled without relabeling support, skipping
Mar 3 12:46:07.025861 ignition[1351]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 3 12:46:07.025861 ignition[1351]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 3 12:46:07.044273 ignition[1351]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 3 12:46:07.049542 ignition[1351]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 3 12:46:07.053219 unknown[1351]: wrote ssh authorized keys file for user: core
Mar 3 12:46:07.056555 ignition[1351]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 3 12:46:07.070362 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 3 12:46:07.070362 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 3 12:46:07.161247 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 3 12:46:07.305123 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 3 12:46:07.305123 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 3 12:46:07.305123 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 3 12:46:07.539014 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 3 12:46:07.674493 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 3 12:46:07.674493 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 3 12:46:07.674493 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 3 12:46:07.674493 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 12:46:07.674493 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 12:46:07.674493 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 12:46:07.699923 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 12:46:07.699923 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 12:46:07.699923 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 12:46:07.715236 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 12:46:07.715236 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 12:46:07.715236 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 3 12:46:07.715236 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 3 12:46:07.715236 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 3 12:46:07.715236 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 3 12:46:08.080886 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 3 12:46:08.426266 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 3 12:46:08.430994 ignition[1351]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 3 12:46:08.430994 ignition[1351]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 12:46:08.440782 ignition[1351]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 12:46:08.440782 ignition[1351]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 3 12:46:08.440782 ignition[1351]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 3 12:46:08.440782 ignition[1351]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 3 12:46:08.440782 ignition[1351]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 12:46:08.440782 ignition[1351]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 12:46:08.440782 ignition[1351]: INFO : files: files passed
Mar 3 12:46:08.440782 ignition[1351]: INFO : Ignition finished successfully
Mar 3 12:46:08.462204 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 3 12:46:08.468686 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 3 12:46:08.488935 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 3 12:46:08.497419 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 3 12:46:08.502025 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 3 12:46:08.532653 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 12:46:08.532653 initrd-setup-root-after-ignition[1381]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 12:46:08.543160 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 12:46:08.543321 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 12:46:08.551999 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 3 12:46:08.558629 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 3 12:46:08.653374 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 3 12:46:08.653709 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 3 12:46:08.659135 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 3 12:46:08.663591 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 3 12:46:08.666197 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 3 12:46:08.668642 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 3 12:46:08.721308 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 12:46:08.728660 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 3 12:46:08.766532 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 3 12:46:08.768769 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 12:46:08.769288 systemd[1]: Stopped target timers.target - Timer Units.
Mar 3 12:46:08.769592 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 3 12:46:08.769892 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 12:46:08.770670 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 3 12:46:08.771073 systemd[1]: Stopped target basic.target - Basic System.
Mar 3 12:46:08.771437 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 3 12:46:08.771819 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 12:46:08.772174 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 3 12:46:08.772503 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 12:46:08.772807 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 3 12:46:08.773173 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 12:46:08.780770 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 3 12:46:08.781375 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 3 12:46:08.782084 systemd[1]: Stopped target swap.target - Swaps.
Mar 3 12:46:08.782368 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 3 12:46:08.782676 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 12:46:08.783486 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 3 12:46:08.783948 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 12:46:08.784196 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 3 12:46:08.806686 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 12:46:08.807004 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 3 12:46:08.807645 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 3 12:46:08.825233 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 3 12:46:08.827633 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 12:46:08.872765 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 3 12:46:08.873017 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 3 12:46:08.881794 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 3 12:46:08.887511 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 3 12:46:08.892367 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 3 12:46:08.895923 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 12:46:08.903955 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 3 12:46:08.908206 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 12:46:08.922411 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 3 12:46:08.922633 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 3 12:46:08.953840 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 3 12:46:08.962853 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 3 12:46:08.963073 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 3 12:46:08.975146 ignition[1405]: INFO : Ignition 2.22.0
Mar 3 12:46:08.975146 ignition[1405]: INFO : Stage: umount
Mar 3 12:46:08.979466 ignition[1405]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:08.979466 ignition[1405]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:08.979466 ignition[1405]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:08.987791 ignition[1405]: INFO : PUT result: OK
Mar 3 12:46:08.993019 ignition[1405]: INFO : umount: umount passed
Mar 3 12:46:08.995602 ignition[1405]: INFO : Ignition finished successfully
Mar 3 12:46:08.997899 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 3 12:46:08.998099 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 3 12:46:09.003099 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 3 12:46:09.003189 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 3 12:46:09.006679 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 3 12:46:09.006761 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 3 12:46:09.013079 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 3 12:46:09.013156 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 3 12:46:09.015919 systemd[1]: Stopped target network.target - Network.
Mar 3 12:46:09.019963 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 3 12:46:09.020059 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 12:46:09.023153 systemd[1]: Stopped target paths.target - Path Units.
Mar 3 12:46:09.026747 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 3 12:46:09.031049 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 12:46:09.031177 systemd[1]: Stopped target slices.target - Slice Units.
Mar 3 12:46:09.035815 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 3 12:46:09.039640 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 3 12:46:09.039713 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 12:46:09.043946 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 3 12:46:09.044011 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 12:46:09.048577 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 3 12:46:09.048670 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 3 12:46:09.052675 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 3 12:46:09.052754 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 3 12:46:09.057123 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 3 12:46:09.057224 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 3 12:46:09.062784 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 3 12:46:09.069726 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 3 12:46:09.098372 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 3 12:46:09.098610 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 3 12:46:09.121065 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 3 12:46:09.121727 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 3 12:46:09.121969 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 3 12:46:09.136238 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 3 12:46:09.138165 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 3 12:46:09.146392 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 3 12:46:09.146690 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 12:46:09.158322 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 3 12:46:09.161143 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 3 12:46:09.161263 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 12:46:09.175508 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 3 12:46:09.176134 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 3 12:46:09.183380 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 3 12:46:09.183505 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 3 12:46:09.186624 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 3 12:46:09.186726 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 12:46:09.195685 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 12:46:09.208070 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 3 12:46:09.208192 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 3 12:46:09.232709 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 3 12:46:09.235142 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 12:46:09.238948 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 3 12:46:09.239031 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 3 12:46:09.243871 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 3 12:46:09.243939 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 12:46:09.246668 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 3 12:46:09.246757 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 12:46:09.254189 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 3 12:46:09.254273 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 3 12:46:09.261672 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 3 12:46:09.261767 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 12:46:09.275731 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 3 12:46:09.287109 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 3 12:46:09.287260 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 12:46:09.291646 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 3 12:46:09.291756 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 12:46:09.297055 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 3 12:46:09.297148 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 3 12:46:09.303214 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 3 12:46:09.303312 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 12:46:09.306237 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 12:46:09.306319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:09.315689 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 3 12:46:09.315806 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 3 12:46:09.315886 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 3 12:46:09.315974 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 3 12:46:09.319350 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 3 12:46:09.319588 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 3 12:46:09.331966 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 3 12:46:09.332143 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 3 12:46:09.337242 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 3 12:46:09.348651 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 3 12:46:09.405017 systemd[1]: Switching root.
Mar 3 12:46:09.464550 systemd-journald[258]: Journal stopped
Mar 3 12:46:12.044938 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Mar 3 12:46:12.045051 kernel: SELinux: policy capability network_peer_controls=1
Mar 3 12:46:12.045093 kernel: SELinux: policy capability open_perms=1
Mar 3 12:46:12.045122 kernel: SELinux: policy capability extended_socket_class=1
Mar 3 12:46:12.045151 kernel: SELinux: policy capability always_check_network=0
Mar 3 12:46:12.045199 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 3 12:46:12.045233 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 3 12:46:12.045273 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 3 12:46:12.045303 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 3 12:46:12.045333 kernel: SELinux: policy capability userspace_initial_context=0
Mar 3 12:46:12.045361 kernel: audit: type=1403 audit(1772541969.923:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 3 12:46:12.045393 systemd[1]: Successfully loaded SELinux policy in 113.698ms.
Mar 3 12:46:12.045436 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.243ms.
Mar 3 12:46:12.045487 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 12:46:12.045522 systemd[1]: Detected virtualization amazon.
Mar 3 12:46:12.045550 systemd[1]: Detected architecture arm64.
Mar 3 12:46:12.045583 systemd[1]: Detected first boot.
Mar 3 12:46:12.045614 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 12:46:12.045643 kernel: NET: Registered PF_VSOCK protocol family
Mar 3 12:46:12.045673 zram_generator::config[1448]: No configuration found.
Mar 3 12:46:12.045714 systemd[1]: Populated /etc with preset unit settings.
Mar 3 12:46:12.045746 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 3 12:46:12.045776 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 3 12:46:12.045805 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 3 12:46:12.045836 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 3 12:46:12.045868 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 3 12:46:12.045905 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 3 12:46:12.045936 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 3 12:46:12.045964 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 3 12:46:12.045993 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 3 12:46:12.046023 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 3 12:46:12.046053 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 3 12:46:12.046082 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 3 12:46:12.046115 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 12:46:12.046144 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 12:46:12.046171 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 3 12:46:12.046200 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 3 12:46:12.046231 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 3 12:46:12.046262 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 12:46:12.046291 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 3 12:46:12.046321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 12:46:12.046353 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 12:46:12.046381 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 3 12:46:12.046412 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 3 12:46:12.046442 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 3 12:46:12.046501 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 3 12:46:12.046539 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 12:46:12.046570 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 12:46:12.046598 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 12:46:12.046629 systemd[1]: Reached target swap.target - Swaps.
Mar 3 12:46:12.046663 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 3 12:46:12.046693 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 3 12:46:12.046724 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 3 12:46:12.046751 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 12:46:12.046780 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 12:46:12.046809 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 12:46:12.046838 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 3 12:46:12.046866 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 3 12:46:12.046894 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 3 12:46:12.046926 systemd[1]: Mounting media.mount - External Media Directory...
Mar 3 12:46:12.046955 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 3 12:46:12.046982 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 3 12:46:12.047010 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 3 12:46:12.047041 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 3 12:46:12.047069 systemd[1]: Reached target machines.target - Containers.
Mar 3 12:46:12.047099 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 3 12:46:12.047128 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 12:46:12.047160 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 12:46:12.047190 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 3 12:46:12.047217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 12:46:12.047247 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 12:46:12.047275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 12:46:12.047304 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 3 12:46:12.047332 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 12:46:12.047362 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 3 12:46:12.047390 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 3 12:46:12.047422 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 3 12:46:12.047484 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 3 12:46:12.047520 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 3 12:46:12.047551 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 12:46:12.047580 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 12:46:12.047607 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 12:46:12.047634 kernel: loop: module loaded
Mar 3 12:46:12.047664 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 12:46:12.047694 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 3 12:46:12.047728 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 3 12:46:12.047756 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 12:46:12.047789 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 3 12:46:12.047817 systemd[1]: Stopped verity-setup.service.
Mar 3 12:46:12.047847 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 3 12:46:12.047874 kernel: fuse: init (API version 7.41)
Mar 3 12:46:12.047902 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 3 12:46:12.047930 systemd[1]: Mounted media.mount - External Media Directory.
Mar 3 12:46:12.047958 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 3 12:46:12.047986 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 3 12:46:12.048018 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 3 12:46:12.048046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 12:46:12.048074 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 3 12:46:12.048102 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 3 12:46:12.048129 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 12:46:12.048157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 12:46:12.048184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 12:46:12.048217 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 12:46:12.048244 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 3 12:46:12.048277 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 3 12:46:12.048306 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 12:46:12.048336 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 12:46:12.048364 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 3 12:46:12.048392 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 3 12:46:12.048419 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 3 12:46:12.050488 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 3 12:46:12.050552 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 3 12:46:12.050592 kernel: ACPI: bus type drm_connector registered
Mar 3 12:46:12.050624 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 12:46:12.050656 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 3 12:46:12.050684 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 3 12:46:12.050713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 12:46:12.050797 systemd-journald[1531]: Collecting audit messages is disabled.
Mar 3 12:46:12.050850 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 3 12:46:12.050884 systemd-journald[1531]: Journal started
Mar 3 12:46:12.050932 systemd-journald[1531]: Runtime Journal (/run/log/journal/ec2a2233d57fb450c4a28f0051c22716) is 8M, max 75.3M, 67.3M free.
Mar 3 12:46:11.297312 systemd[1]: Queued start job for default target multi-user.target.
Mar 3 12:46:11.312239 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 3 12:46:12.054574 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 12:46:11.313264 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 3 12:46:12.071764 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 3 12:46:12.076492 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 12:46:12.087501 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 12:46:12.103505 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 3 12:46:12.110487 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 3 12:46:12.123484 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 12:46:12.123569 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 3 12:46:12.130124 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 12:46:12.130603 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 12:46:12.133918 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 12:46:12.137531 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 3 12:46:12.142092 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 3 12:46:12.145376 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 3 12:46:12.152808 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 3 12:46:12.201882 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 3 12:46:12.206357 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 12:46:12.212264 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 3 12:46:12.224860 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 3 12:46:12.228308 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 12:46:12.257486 kernel: loop0: detected capacity change from 0 to 119840
Mar 3 12:46:12.273698 systemd-tmpfiles[1565]: ACLs are not supported, ignoring.
Mar 3 12:46:12.273722 systemd-tmpfiles[1565]: ACLs are not supported, ignoring.
Mar 3 12:46:12.275557 systemd-journald[1531]: Time spent on flushing to /var/log/journal/ec2a2233d57fb450c4a28f0051c22716 is 71.649ms for 939 entries.
Mar 3 12:46:12.275557 systemd-journald[1531]: System Journal (/var/log/journal/ec2a2233d57fb450c4a28f0051c22716) is 8M, max 195.6M, 187.6M free.
Mar 3 12:46:12.369950 systemd-journald[1531]: Received client request to flush runtime journal.
Mar 3 12:46:12.276191 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 3 12:46:12.309266 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 3 12:46:12.316927 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 3 12:46:12.322214 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 3 12:46:12.327553 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 12:46:12.379664 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 3 12:46:12.389982 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 3 12:46:12.416505 kernel: loop1: detected capacity change from 0 to 100632
Mar 3 12:46:12.429795 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 3 12:46:12.436811 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 3 12:46:12.478696 systemd-tmpfiles[1605]: ACLs are not supported, ignoring.
Mar 3 12:46:12.478737 systemd-tmpfiles[1605]: ACLs are not supported, ignoring.
Mar 3 12:46:12.485997 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 12:46:12.528490 kernel: loop2: detected capacity change from 0 to 209336
Mar 3 12:46:12.909934 kernel: loop3: detected capacity change from 0 to 61264
Mar 3 12:46:12.948551 kernel: loop4: detected capacity change from 0 to 119840
Mar 3 12:46:12.968538 kernel: loop5: detected capacity change from 0 to 100632
Mar 3 12:46:12.988548 kernel: loop6: detected capacity change from 0 to 209336
Mar 3 12:46:13.016474 kernel: loop7: detected capacity change from 0 to 61264
Mar 3 12:46:13.034379 (sd-merge)[1614]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 3 12:46:13.035366 (sd-merge)[1614]: Merged extensions into '/usr'.
Mar 3 12:46:13.045631 systemd[1]: Reload requested from client PID 1564 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 3 12:46:13.045655 systemd[1]: Reloading...
Mar 3 12:46:13.244495 zram_generator::config[1649]: No configuration found.
Mar 3 12:46:13.628533 systemd[1]: Reloading finished in 582 ms.
Mar 3 12:46:13.653047 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 3 12:46:13.657186 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 3 12:46:13.677684 systemd[1]: Starting ensure-sysext.service...
Mar 3 12:46:13.682761 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 3 12:46:13.690839 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 12:46:13.721280 systemd[1]: Reload requested from client PID 1692 ('systemctl') (unit ensure-sysext.service)...
Mar 3 12:46:13.721308 systemd[1]: Reloading...
Mar 3 12:46:13.762230 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 3 12:46:13.765597 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 3 12:46:13.766170 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 3 12:46:13.766691 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 3 12:46:13.770479 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 3 12:46:13.771229 systemd-tmpfiles[1693]: ACLs are not supported, ignoring.
Mar 3 12:46:13.773571 systemd-tmpfiles[1693]: ACLs are not supported, ignoring.
Mar 3 12:46:13.789367 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 12:46:13.793375 systemd-tmpfiles[1693]: Skipping /boot
Mar 3 12:46:13.836876 ldconfig[1560]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 3 12:46:13.859976 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 12:46:13.861030 systemd-tmpfiles[1693]: Skipping /boot
Mar 3 12:46:13.879622 zram_generator::config[1724]: No configuration found.
Mar 3 12:46:13.891730 systemd-udevd[1694]: Using default interface naming scheme 'v255'.
Mar 3 12:46:14.198422 (udev-worker)[1747]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:46:14.502240 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 3 12:46:14.503123 systemd[1]: Reloading finished in 781 ms.
Mar 3 12:46:14.557934 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 12:46:14.563001 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 3 12:46:14.587577 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 12:46:14.640828 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 12:46:14.645559 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 3 12:46:14.651801 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 3 12:46:14.661743 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 3 12:46:14.671748 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 3 12:46:14.677697 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 3 12:46:14.765854 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 3 12:46:14.778069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 12:46:14.781690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 12:46:14.787281 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 12:46:14.796047 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 12:46:14.801535 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 12:46:14.801803 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 12:46:14.808350 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 12:46:14.808725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 12:46:14.808922 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 12:46:14.818958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 12:46:14.823027 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 12:46:14.825607 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 12:46:14.825856 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 12:46:14.826170 systemd[1]: Reached target time-set.target - System Time Set.
Mar 3 12:46:14.891550 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 3 12:46:14.898043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 12:46:14.899571 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 12:46:14.903219 systemd[1]: Finished ensure-sysext.service.
Mar 3 12:46:14.907048 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 12:46:14.922246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 12:46:14.923188 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 12:46:14.945220 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 3 12:46:14.954794 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 3 12:46:14.969083 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 12:46:14.970401 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 12:46:14.973772 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 12:46:14.980718 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 12:46:14.981187 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 12:46:15.014158 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 3 12:46:15.070425 augenrules[1949]: No rules
Mar 3 12:46:15.071942 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 12:46:15.074827 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 12:46:15.086105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 12:46:15.120138 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 3 12:46:15.123665 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 3 12:46:15.172007 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 3 12:46:15.178568 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 3 12:46:15.247545 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 3 12:46:15.293137 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 3 12:46:15.314551 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 12:46:15.459209 systemd-networkd[1892]: lo: Link UP Mar 3 12:46:15.459727 systemd-networkd[1892]: lo: Gained carrier Mar 3 12:46:15.462862 systemd-networkd[1892]: Enumeration completed Mar 3 12:46:15.463203 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 3 12:46:15.464155 systemd-networkd[1892]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 12:46:15.464164 systemd-networkd[1892]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 3 12:46:15.470528 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 3 12:46:15.476492 systemd-resolved[1894]: Positive Trust Anchors:
Mar 3 12:46:15.476527 systemd-resolved[1894]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 3 12:46:15.476558 systemd-networkd[1892]: eth0: Link UP Mar 3 12:46:15.476591 systemd-resolved[1894]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 3 12:46:15.476825 systemd-networkd[1892]: eth0: Gained carrier Mar 3 12:46:15.476860 systemd-networkd[1892]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 12:46:15.477385 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 3 12:46:15.494811 systemd-networkd[1892]: eth0: DHCPv4 address 172.31.17.163/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 3 12:46:15.507049 systemd-resolved[1894]: Defaulting to hostname 'linux'. Mar 3 12:46:15.510867 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 3 12:46:15.513639 systemd[1]: Reached target network.target - Network. Mar 3 12:46:15.515745 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 3 12:46:15.518507 systemd[1]: Reached target sysinit.target - System Initialization. Mar 3 12:46:15.521068 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 3 12:46:15.524232 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 3 12:46:15.527715 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 3 12:46:15.530568 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 3 12:46:15.533485 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 3 12:46:15.536502 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 3 12:46:15.536560 systemd[1]: Reached target paths.target - Path Units. Mar 3 12:46:15.539215 systemd[1]: Reached target timers.target - Timer Units. Mar 3 12:46:15.543227 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 3 12:46:15.548626 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 3 12:46:15.554881 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 3 12:46:15.558226 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 3 12:46:15.561364 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 3 12:46:15.567427 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 3 12:46:15.570582 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 3 12:46:15.574900 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 3 12:46:15.579110 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 3 12:46:15.582407 systemd[1]: Reached target sockets.target - Socket Units. Mar 3 12:46:15.584637 systemd[1]: Reached target basic.target - Basic System. Mar 3 12:46:15.587868 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Mar 3 12:46:15.588090 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 3 12:46:15.590100 systemd[1]: Starting containerd.service - containerd container runtime... Mar 3 12:46:15.594978 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 3 12:46:15.600803 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 3 12:46:15.609246 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 3 12:46:15.617092 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 3 12:46:15.629992 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 3 12:46:15.632584 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 3 12:46:15.635763 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 3 12:46:15.643048 systemd[1]: Started ntpd.service - Network Time Service. Mar 3 12:46:15.663162 jq[1982]: false Mar 3 12:46:15.652154 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 3 12:46:15.656729 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 3 12:46:15.661891 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 3 12:46:15.673890 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 3 12:46:15.683525 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 3 12:46:15.687867 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 3 12:46:15.688977 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Mar 3 12:46:15.705038 systemd[1]: Starting update-engine.service - Update Engine... Mar 3 12:46:15.719656 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 3 12:46:15.728207 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 3 12:46:15.731774 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 3 12:46:15.732179 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 3 12:46:15.786939 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 3 12:46:15.791211 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 3 12:46:15.800324 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 3 12:46:15.831786 jq[1993]: true Mar 3 12:46:15.850767 extend-filesystems[1983]: Found /dev/nvme0n1p6 Mar 3 12:46:15.866518 extend-filesystems[1983]: Found /dev/nvme0n1p9 Mar 3 12:46:15.875534 (ntainerd)[2015]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 3 12:46:15.894611 extend-filesystems[1983]: Checking size of /dev/nvme0n1p9 Mar 3 12:46:15.908005 systemd[1]: motdgen.service: Deactivated successfully. Mar 3 12:46:15.910096 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 3 12:46:15.937896 dbus-daemon[1980]: [system] SELinux support is enabled Mar 3 12:46:15.961919 jq[2020]: true Mar 3 12:46:15.938415 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 3 12:46:15.952507 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 3 12:46:15.952598 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 3 12:46:15.955581 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 3 12:46:15.955614 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 3 12:46:15.975164 coreos-metadata[1979]: Mar 03 12:46:15.975 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 3 12:46:15.982591 coreos-metadata[1979]: Mar 03 12:46:15.979 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 3 12:46:15.982704 extend-filesystems[1983]: Resized partition /dev/nvme0n1p9 Mar 3 12:46:15.989509 coreos-metadata[1979]: Mar 03 12:46:15.982 INFO Fetch successful Mar 3 12:46:15.989509 coreos-metadata[1979]: Mar 03 12:46:15.982 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 3 12:46:15.989923 dbus-daemon[1980]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1892 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 3 12:46:15.996779 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Mar 3 12:46:16.004492 tar[2003]: linux-arm64/LICENSE Mar 3 12:46:16.004492 tar[2003]: linux-arm64/helm Mar 3 12:46:16.005023 extend-filesystems[2038]: resize2fs 1.47.3 (8-Jul-2025) Mar 3 12:46:16.007583 coreos-metadata[1979]: Mar 03 12:46:16.003 INFO Fetch successful Mar 3 12:46:16.007583 coreos-metadata[1979]: Mar 03 12:46:16.003 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 3 12:46:16.007583 coreos-metadata[1979]: Mar 03 12:46:16.007 INFO Fetch successful Mar 3 12:46:16.007583 coreos-metadata[1979]: Mar 03 12:46:16.007 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 3 12:46:16.015819 coreos-metadata[1979]: Mar 03 12:46:16.015 INFO Fetch successful Mar 3 12:46:16.015819 coreos-metadata[1979]: Mar 03 12:46:16.015 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 3 12:46:16.021695 ntpd[1985]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:21:35 UTC 2026 (1): Starting Mar 3 12:46:16.026058 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:21:35 UTC 2026 (1): Starting Mar 3 12:46:16.026058 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 3 12:46:16.026058 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: ---------------------------------------------------- Mar 3 12:46:16.026058 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Mar 3 12:46:16.026058 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 12:46:16.026058 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: corporation. Support and training for ntp-4 are Mar 3 12:46:16.026058 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: available at https://www.nwtime.org/support Mar 3 12:46:16.026058 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: ---------------------------------------------------- Mar 3 12:46:16.026717 coreos-metadata[1979]: Mar 03 12:46:16.025 INFO Fetch failed with 404: resource not found Mar 3 12:46:16.026717 coreos-metadata[1979]: Mar 03 12:46:16.025 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 3 12:46:16.021805 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 3 12:46:16.021823 ntpd[1985]: ---------------------------------------------------- Mar 3 12:46:16.021841 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Mar 3 12:46:16.021857 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 3 12:46:16.021872 ntpd[1985]: corporation. Support and training for ntp-4 are Mar 3 12:46:16.021888 ntpd[1985]: available at https://www.nwtime.org/support Mar 3 12:46:16.021904 ntpd[1985]: ---------------------------------------------------- Mar 3 12:46:16.037716 coreos-metadata[1979]: Mar 03 12:46:16.030 INFO Fetch successful Mar 3 12:46:16.037716 coreos-metadata[1979]: Mar 03 12:46:16.030 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 3 12:46:16.037716 coreos-metadata[1979]: Mar 03 12:46:16.032 INFO Fetch successful Mar 3 12:46:16.037716 coreos-metadata[1979]: Mar 03 12:46:16.032 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 3 12:46:16.038081 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: proto: precision = 0.096 usec (-23) Mar 3 12:46:16.038081 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: basedate set to 2026-02-19 Mar 3 12:46:16.038081 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: gps base set to 2026-02-22 (week 2407) Mar 3 12:46:16.038081 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 12:46:16.038231 update_engine[1991]: I20260303 12:46:16.030496 1991 main.cc:92] Flatcar Update Engine starting Mar 3 12:46:16.033159 ntpd[1985]: proto: precision = 0.096 usec (-23) Mar 3 12:46:16.056338 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 3 12:46:16.056338 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Mar 3 12:46:16.056338 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: Listen normally on 3 eth0 172.31.17.163:123 Mar 3 12:46:16.056338 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: Listen normally on 4 lo [::1]:123 Mar 3 12:46:16.056338 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: bind(21) AF_INET6 [fe80::414:77ff:fedd:9011%2]:123 flags 0x811 failed: Cannot assign requested address Mar 3 12:46:16.056338 ntpd[1985]: 3 Mar 12:46:16 ntpd[1985]: unable to create socket on eth0 (5) for [fe80::414:77ff:fedd:9011%2]:123 Mar 3 12:46:16.037864 ntpd[1985]: basedate set to 2026-02-19 Mar 3 12:46:16.051944 systemd-coredump[2045]: Process 1985 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Mar 3 12:46:16.057076 coreos-metadata[1979]: Mar 03 12:46:16.039 INFO Fetch successful Mar 3 12:46:16.057076 coreos-metadata[1979]: Mar 03 12:46:16.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 3 12:46:16.037896 ntpd[1985]: gps base set to 2026-02-22 (week 2407) Mar 3 12:46:16.054288 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 3 12:46:16.038073 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Mar 3 12:46:16.038117 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 3 12:46:16.038398 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Mar 3 12:46:16.038440 ntpd[1985]: Listen normally on 3 eth0 172.31.17.163:123 Mar 3 12:46:16.038522 ntpd[1985]: Listen normally on 4 lo [::1]:123 Mar 3 12:46:16.038570 ntpd[1985]: bind(21) AF_INET6 [fe80::414:77ff:fedd:9011%2]:123 flags 0x811 failed: Cannot assign requested address Mar 3 12:46:16.038608 ntpd[1985]: unable to create socket on eth0 (5) for [fe80::414:77ff:fedd:9011%2]:123 Mar 3 12:46:16.060475 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 3 12:46:16.063477 coreos-metadata[1979]: Mar 03 12:46:16.063 INFO Fetch successful Mar 3 12:46:16.063477 coreos-metadata[1979]: Mar 03 12:46:16.063 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 3 12:46:16.066351 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Mar 3 12:46:16.072138 coreos-metadata[1979]: Mar 03 12:46:16.070 INFO Fetch successful Mar 3 12:46:16.081134 systemd[1]: Started systemd-coredump@0-2045-0.service - Process Core Dump (PID 2045/UID 0). Mar 3 12:46:16.085927 systemd[1]: Started update-engine.service - Update Engine. Mar 3 12:46:16.093728 update_engine[1991]: I20260303 12:46:16.090483 1991 update_check_scheduler.cc:74] Next update check in 9m32s Mar 3 12:46:16.094253 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 3 12:46:16.203530 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 3 12:46:16.214035 extend-filesystems[2038]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 3 12:46:16.214035 extend-filesystems[2038]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 3 12:46:16.214035 extend-filesystems[2038]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. 
Mar 3 12:46:16.226680 extend-filesystems[1983]: Resized filesystem in /dev/nvme0n1p9 Mar 3 12:46:16.225970 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 3 12:46:16.226406 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 3 12:46:16.259476 bash[2067]: Updated "/home/core/.ssh/authorized_keys" Mar 3 12:46:16.270540 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 3 12:46:16.286789 systemd[1]: Starting sshkeys.service... Mar 3 12:46:16.289108 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 3 12:46:16.300730 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 3 12:46:16.454438 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 3 12:46:16.463984 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 3 12:46:16.483776 systemd-logind[1990]: Watching system buttons on /dev/input/event0 (Power Button) Mar 3 12:46:16.483820 systemd-logind[1990]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 3 12:46:16.484901 systemd-logind[1990]: New seat seat0. Mar 3 12:46:16.491155 systemd[1]: Started systemd-logind.service - User Login Management. Mar 3 12:46:16.683873 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 3 12:46:16.700117 dbus-daemon[1980]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 3 12:46:16.705042 dbus-daemon[1980]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2037 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 3 12:46:16.718919 systemd[1]: Starting polkit.service - Authorization Manager... 
Mar 3 12:46:16.965394 coreos-metadata[2110]: Mar 03 12:46:16.962 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 3 12:46:16.982120 coreos-metadata[2110]: Mar 03 12:46:16.978 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 3 12:46:16.988808 coreos-metadata[2110]: Mar 03 12:46:16.988 INFO Fetch successful Mar 3 12:46:16.988808 coreos-metadata[2110]: Mar 03 12:46:16.988 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 3 12:46:16.994643 systemd-networkd[1892]: eth0: Gained IPv6LL Mar 3 12:46:17.000661 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 3 12:46:17.002684 coreos-metadata[2110]: Mar 03 12:46:17.000 INFO Fetch successful Mar 3 12:46:17.006591 systemd[1]: Reached target network-online.target - Network is Online. Mar 3 12:46:17.011920 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 3 12:46:17.016758 unknown[2110]: wrote ssh authorized keys file for user: core Mar 3 12:46:17.019037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:46:17.029203 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 3 12:46:17.066572 containerd[2015]: time="2026-03-03T12:46:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 3 12:46:17.074612 containerd[2015]: time="2026-03-03T12:46:17.070608082Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 3 12:46:17.083831 systemd-coredump[2048]: Process 1985 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1985: #0 0x0000aaaad0c10b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaad0bbfe60 n/a (ntpd + 0xfe60) #2 0x0000aaaad0bc0240 n/a (ntpd + 0x10240) #3 0x0000aaaad0bbbe14 n/a (ntpd + 0xbe14) #4 0x0000aaaad0bbd3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaad0bc5a38 n/a (ntpd + 0x15a38) #6 0x0000aaaad0bb738c n/a (ntpd + 0x738c) #7 0x0000ffffb08b2034 n/a (libc.so.6 + 0x22034) #8 0x0000ffffb08b2118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaad0bb73f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Mar 3 12:46:17.087562 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 3 12:46:17.087891 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 3 12:46:17.105999 systemd[1]: systemd-coredump@0-2045-0.service: Deactivated successfully. Mar 3 12:46:17.152122 update-ssh-keys[2171]: Updated "/home/core/.ssh/authorized_keys" Mar 3 12:46:17.153542 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 3 12:46:17.167603 systemd[1]: Finished sshkeys.service.
Mar 3 12:46:17.185606 containerd[2015]: time="2026-03-03T12:46:17.185514922Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="43.116µs" Mar 3 12:46:17.185606 containerd[2015]: time="2026-03-03T12:46:17.185576230Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 3 12:46:17.185784 containerd[2015]: time="2026-03-03T12:46:17.185629234Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 3 12:46:17.185971 containerd[2015]: time="2026-03-03T12:46:17.185921782Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 3 12:46:17.186036 containerd[2015]: time="2026-03-03T12:46:17.185980486Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 3 12:46:17.186086 containerd[2015]: time="2026-03-03T12:46:17.186034978Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 3 12:46:17.186725 containerd[2015]: time="2026-03-03T12:46:17.186152314Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 3 12:46:17.186725 containerd[2015]: time="2026-03-03T12:46:17.186188614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 3 12:46:17.190196 containerd[2015]: time="2026-03-03T12:46:17.187186894Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 12:46:17.190196 containerd[2015]: time="2026-03-03T12:46:17.187263454Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 3 12:46:17.190196 containerd[2015]: time="2026-03-03T12:46:17.187297030Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 3 12:46:17.190196 containerd[2015]: time="2026-03-03T12:46:17.187321306Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 3 12:46:17.190196 containerd[2015]: time="2026-03-03T12:46:17.187616722Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 3 12:46:17.190196 containerd[2015]: time="2026-03-03T12:46:17.188096614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 3 12:46:17.190196 containerd[2015]: time="2026-03-03T12:46:17.188168002Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 3 12:46:17.190196 containerd[2015]: time="2026-03-03T12:46:17.188194834Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 3 12:46:17.190196 containerd[2015]: time="2026-03-03T12:46:17.188280670Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 3 12:46:17.190196 containerd[2015]: time="2026-03-03T12:46:17.188977786Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 3 12:46:17.190196 containerd[2015]: time="2026-03-03T12:46:17.189206086Z" level=info msg="metadata content store policy set" policy=shared Mar 3 12:46:17.198186 containerd[2015]: time="2026-03-03T12:46:17.197951278Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 3 12:46:17.198186 containerd[2015]: time="2026-03-03T12:46:17.198066238Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 3 12:46:17.198186 containerd[2015]: time="2026-03-03T12:46:17.198117610Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 3 12:46:17.198186 containerd[2015]: time="2026-03-03T12:46:17.198147238Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 3 12:46:17.198486 containerd[2015]: time="2026-03-03T12:46:17.198213766Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 3 12:46:17.198486 containerd[2015]: time="2026-03-03T12:46:17.198266830Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 3 12:46:17.198486 containerd[2015]: time="2026-03-03T12:46:17.198314398Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 3 12:46:17.198486 containerd[2015]: time="2026-03-03T12:46:17.198343762Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 3 12:46:17.198486 containerd[2015]: time="2026-03-03T12:46:17.198374002Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 3 12:46:17.198486 containerd[2015]: time="2026-03-03T12:46:17.198400690Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 3 12:46:17.198486 containerd[2015]: time="2026-03-03T12:46:17.198427486Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.199510462Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.199767814Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.199811914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.199845766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.199877086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.199903642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.199930870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.199957834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.199983010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.200009998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.200035882Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.200079754Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.201656806Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.201707494Z" level=info msg="Start snapshots syncer" Mar 3 12:46:17.204931 containerd[2015]: time="2026-03-03T12:46:17.201771538Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 3 12:46:17.205705 containerd[2015]: time="2026-03-03T12:46:17.202215574Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 3 12:46:17.205705 containerd[2015]: time="2026-03-03T12:46:17.202306210Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.202403674Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.204705730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.204764926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.204794062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.204820150Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.204851314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.204878098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.204905998Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.204961222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.204992446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.205022710Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.205089118Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.205121902Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 12:46:17.205897 containerd[2015]: time="2026-03-03T12:46:17.205162654Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 12:46:17.217640 containerd[2015]: time="2026-03-03T12:46:17.205193518Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 12:46:17.217640 containerd[2015]: time="2026-03-03T12:46:17.205215262Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 3 12:46:17.217640 containerd[2015]: time="2026-03-03T12:46:17.205240858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 3 12:46:17.217640 containerd[2015]: time="2026-03-03T12:46:17.205269130Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 3 12:46:17.217640 containerd[2015]: time="2026-03-03T12:46:17.205436206Z" level=info msg="runtime interface created"
Mar 3 12:46:17.217640 containerd[2015]: time="2026-03-03T12:46:17.205474762Z" level=info msg="created NRI interface"
Mar 3 12:46:17.217640 containerd[2015]: time="2026-03-03T12:46:17.205498270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 3 12:46:17.217640 containerd[2015]: time="2026-03-03T12:46:17.205531474Z" level=info msg="Connect containerd service"
Mar 3 12:46:17.217640 containerd[2015]: time="2026-03-03T12:46:17.205576246Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 3 12:46:17.208928 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Mar 3 12:46:17.212977 systemd[1]: Started ntpd.service - Network Time Service.
Mar 3 12:46:17.221111 containerd[2015]: time="2026-03-03T12:46:17.220917875Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 3 12:46:17.344515 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 3 12:46:17.393185 locksmithd[2050]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 3 12:46:17.428350 ntpd[2198]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:21:35 UTC 2026 (1): Starting
Mar 3 12:46:17.431950 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:21:35 UTC 2026 (1): Starting
Mar 3 12:46:17.431950 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 12:46:17.431950 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: ----------------------------------------------------
Mar 3 12:46:17.431950 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: ntp-4 is maintained by Network Time Foundation,
Mar 3 12:46:17.431950 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 12:46:17.431950 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: corporation. Support and training for ntp-4 are
Mar 3 12:46:17.431950 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: available at https://www.nwtime.org/support
Mar 3 12:46:17.431950 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: ----------------------------------------------------
Mar 3 12:46:17.428476 ntpd[2198]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 12:46:17.428496 ntpd[2198]: ----------------------------------------------------
Mar 3 12:46:17.428513 ntpd[2198]: ntp-4 is maintained by Network Time Foundation,
Mar 3 12:46:17.428529 ntpd[2198]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 12:46:17.428544 ntpd[2198]: corporation. Support and training for ntp-4 are
Mar 3 12:46:17.428560 ntpd[2198]: available at https://www.nwtime.org/support
Mar 3 12:46:17.428576 ntpd[2198]: ----------------------------------------------------
Mar 3 12:46:17.438717 ntpd[2198]: proto: precision = 0.108 usec (-23)
Mar 3 12:46:17.443954 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: proto: precision = 0.108 usec (-23)
Mar 3 12:46:17.443954 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: basedate set to 2026-02-19
Mar 3 12:46:17.443954 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: gps base set to 2026-02-22 (week 2407)
Mar 3 12:46:17.443954 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 12:46:17.443954 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 12:46:17.443954 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 12:46:17.443954 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: Listen normally on 3 eth0 172.31.17.163:123
Mar 3 12:46:17.443954 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: Listen normally on 4 lo [::1]:123
Mar 3 12:46:17.443954 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: Listen normally on 5 eth0 [fe80::414:77ff:fedd:9011%2]:123
Mar 3 12:46:17.443954 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: Listening on routing socket on fd #22 for interface updates
Mar 3 12:46:17.439050 ntpd[2198]: basedate set to 2026-02-19
Mar 3 12:46:17.439070 ntpd[2198]: gps base set to 2026-02-22 (week 2407)
Mar 3 12:46:17.439185 ntpd[2198]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 12:46:17.439227 ntpd[2198]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 12:46:17.439532 ntpd[2198]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 12:46:17.439577 ntpd[2198]: Listen normally on 3 eth0 172.31.17.163:123
Mar 3 12:46:17.439621 ntpd[2198]: Listen normally on 4 lo [::1]:123
Mar 3 12:46:17.439663 ntpd[2198]: Listen normally on 5 eth0 [fe80::414:77ff:fedd:9011%2]:123
Mar 3 12:46:17.439704 ntpd[2198]: Listening on routing socket on fd #22 for interface updates
Mar 3 12:46:17.466269 polkitd[2158]: Started polkitd version 126
Mar 3 12:46:17.472299 ntpd[2198]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 3 12:46:17.474761 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 3 12:46:17.474761 ntpd[2198]: 3 Mar 12:46:17 ntpd[2198]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 3 12:46:17.472359 ntpd[2198]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 3 12:46:17.487477 amazon-ssm-agent[2166]: Initializing new seelog logger
Mar 3 12:46:17.492815 amazon-ssm-agent[2166]: New Seelog Logger Creation Complete
Mar 3 12:46:17.497083 amazon-ssm-agent[2166]: 2026/03/03 12:46:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:17.497083 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:17.497083 amazon-ssm-agent[2166]: 2026/03/03 12:46:17 processing appconfig overrides
Mar 3 12:46:17.497083 amazon-ssm-agent[2166]: 2026/03/03 12:46:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:17.497083 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.4963 INFO Proxy environment variables:
Mar 3 12:46:17.500582 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:17.500582 amazon-ssm-agent[2166]: 2026/03/03 12:46:17 processing appconfig overrides
Mar 3 12:46:17.500888 polkitd[2158]: Loading rules from directory /etc/polkit-1/rules.d
Mar 3 12:46:17.501219 amazon-ssm-agent[2166]: 2026/03/03 12:46:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:17.501316 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:17.501654 amazon-ssm-agent[2166]: 2026/03/03 12:46:17 processing appconfig overrides
Mar 3 12:46:17.503612 polkitd[2158]: Loading rules from directory /run/polkit-1/rules.d
Mar 3 12:46:17.504032 polkitd[2158]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 3 12:46:17.506074 polkitd[2158]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Mar 3 12:46:17.506168 polkitd[2158]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 3 12:46:17.506254 polkitd[2158]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 3 12:46:17.509934 amazon-ssm-agent[2166]: 2026/03/03 12:46:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:17.509934 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:17.509934 amazon-ssm-agent[2166]: 2026/03/03 12:46:17 processing appconfig overrides
Mar 3 12:46:17.510956 polkitd[2158]: Finished loading, compiling and executing 2 rules
Mar 3 12:46:17.511817 systemd[1]: Started polkit.service - Authorization Manager.
Mar 3 12:46:17.517915 dbus-daemon[1980]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 3 12:46:17.520538 polkitd[2158]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 3 12:46:17.570443 systemd-hostnamed[2037]: Hostname set to (transient)
Mar 3 12:46:17.570836 systemd-resolved[1894]: System hostname changed to 'ip-172-31-17-163'.
Mar 3 12:46:17.601488 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.4964 INFO https_proxy:
Mar 3 12:46:17.700578 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.4964 INFO http_proxy:
Mar 3 12:46:17.770591 containerd[2015]: time="2026-03-03T12:46:17.769768489Z" level=info msg="Start subscribing containerd event"
Mar 3 12:46:17.774522 containerd[2015]: time="2026-03-03T12:46:17.770547697Z" level=info msg="Start recovering state"
Mar 3 12:46:17.774998 containerd[2015]: time="2026-03-03T12:46:17.774953533Z" level=info msg="Start event monitor"
Mar 3 12:46:17.775722 containerd[2015]: time="2026-03-03T12:46:17.775144837Z" level=info msg="Start cni network conf syncer for default"
Mar 3 12:46:17.775722 containerd[2015]: time="2026-03-03T12:46:17.775230157Z" level=info msg="Start streaming server"
Mar 3 12:46:17.775722 containerd[2015]: time="2026-03-03T12:46:17.775253053Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 3 12:46:17.775722 containerd[2015]: time="2026-03-03T12:46:17.775294993Z" level=info msg="runtime interface starting up..."
Mar 3 12:46:17.775722 containerd[2015]: time="2026-03-03T12:46:17.775310737Z" level=info msg="starting plugins..."
Mar 3 12:46:17.775722 containerd[2015]: time="2026-03-03T12:46:17.775372717Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 3 12:46:17.775722 containerd[2015]: time="2026-03-03T12:46:17.775303321Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 3 12:46:17.775722 containerd[2015]: time="2026-03-03T12:46:17.775571725Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 3 12:46:17.775722 containerd[2015]: time="2026-03-03T12:46:17.775680109Z" level=info msg="containerd successfully booted in 0.712726s"
Mar 3 12:46:17.775799 systemd[1]: Started containerd.service - containerd container runtime.
Mar 3 12:46:17.803466 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.4964 INFO no_proxy:
Mar 3 12:46:17.901276 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.5002 INFO Checking if agent identity type OnPrem can be assumed
Mar 3 12:46:17.999967 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.5003 INFO Checking if agent identity type EC2 can be assumed
Mar 3 12:46:18.099323 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.7116 INFO Agent will take identity from EC2
Mar 3 12:46:18.198392 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.7196 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Mar 3 12:46:18.297415 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.7196 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Mar 3 12:46:18.303094 tar[2003]: linux-arm64/README.md
Mar 3 12:46:18.343296 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 3 12:46:18.396623 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.7197 INFO [amazon-ssm-agent] Starting Core Agent
Mar 3 12:46:18.496856 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.7197 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Mar 3 12:46:18.550081 amazon-ssm-agent[2166]: 2026/03/03 12:46:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:18.550081 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:18.550231 amazon-ssm-agent[2166]: 2026/03/03 12:46:18 processing appconfig overrides
Mar 3 12:46:18.571730 sshd_keygen[2013]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 3 12:46:18.577317 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.7197 INFO [Registrar] Starting registrar module
Mar 3 12:46:18.577317 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.7276 INFO [EC2Identity] Checking disk for registration info
Mar 3 12:46:18.577527 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.7277 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Mar 3 12:46:18.577527 amazon-ssm-agent[2166]: 2026-03-03 12:46:17.7277 INFO [EC2Identity] Generating registration keypair
Mar 3 12:46:18.577527 amazon-ssm-agent[2166]: 2026-03-03 12:46:18.5070 INFO [EC2Identity] Checking write access before registering
Mar 3 12:46:18.577527 amazon-ssm-agent[2166]: 2026-03-03 12:46:18.5077 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Mar 3 12:46:18.577527 amazon-ssm-agent[2166]: 2026-03-03 12:46:18.5496 INFO [EC2Identity] EC2 registration was successful.
Mar 3 12:46:18.577742 amazon-ssm-agent[2166]: 2026-03-03 12:46:18.5497 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Mar 3 12:46:18.577742 amazon-ssm-agent[2166]: 2026-03-03 12:46:18.5498 INFO [CredentialRefresher] credentialRefresher has started
Mar 3 12:46:18.577742 amazon-ssm-agent[2166]: 2026-03-03 12:46:18.5498 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 3 12:46:18.577742 amazon-ssm-agent[2166]: 2026-03-03 12:46:18.5767 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 3 12:46:18.577742 amazon-ssm-agent[2166]: 2026-03-03 12:46:18.5771 INFO [CredentialRefresher] Credentials ready
Mar 3 12:46:18.596937 amazon-ssm-agent[2166]: 2026-03-03 12:46:18.5777 INFO [CredentialRefresher] Next credential rotation will be in 29.9999849333 minutes
Mar 3 12:46:18.611593 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 3 12:46:18.617284 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 3 12:46:18.631125 systemd[1]: Started sshd@0-172.31.17.163:22-20.161.92.111:44580.service - OpenSSH per-connection server daemon (20.161.92.111:44580).
Mar 3 12:46:18.642109 systemd[1]: issuegen.service: Deactivated successfully.
Mar 3 12:46:18.644201 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 3 12:46:18.657372 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 3 12:46:18.690586 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 3 12:46:18.698428 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 3 12:46:18.709336 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 3 12:46:18.712237 systemd[1]: Reached target getty.target - Login Prompts.
Mar 3 12:46:19.170528 sshd[2247]: Accepted publickey for core from 20.161.92.111 port 44580 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:19.173022 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:19.199529 systemd-logind[1990]: New session 1 of user core.
Mar 3 12:46:19.202286 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 3 12:46:19.208838 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 3 12:46:19.264163 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 3 12:46:19.274949 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 3 12:46:19.295349 (systemd)[2259]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 3 12:46:19.300604 systemd-logind[1990]: New session c1 of user core.
Mar 3 12:46:19.605830 systemd[2259]: Queued start job for default target default.target.
Mar 3 12:46:19.606935 amazon-ssm-agent[2166]: 2026-03-03 12:46:19.6054 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 3 12:46:19.614441 systemd[2259]: Created slice app.slice - User Application Slice.
Mar 3 12:46:19.614613 systemd[2259]: Reached target paths.target - Paths.
Mar 3 12:46:19.614706 systemd[2259]: Reached target timers.target - Timers.
Mar 3 12:46:19.618657 systemd[2259]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 3 12:46:19.644387 systemd[2259]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 3 12:46:19.646117 systemd[2259]: Reached target sockets.target - Sockets.
Mar 3 12:46:19.646475 systemd[2259]: Reached target basic.target - Basic System.
Mar 3 12:46:19.646575 systemd[2259]: Reached target default.target - Main User Target.
Mar 3 12:46:19.646639 systemd[2259]: Startup finished in 333ms.
Mar 3 12:46:19.646775 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 3 12:46:19.655762 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 3 12:46:19.708503 amazon-ssm-agent[2166]: 2026-03-03 12:46:19.6129 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2267) started
Mar 3 12:46:19.808532 amazon-ssm-agent[2166]: 2026-03-03 12:46:19.6130 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 3 12:46:19.928874 systemd[1]: Started sshd@1-172.31.17.163:22-20.161.92.111:44582.service - OpenSSH per-connection server daemon (20.161.92.111:44582).
Mar 3 12:46:20.428673 sshd[2283]: Accepted publickey for core from 20.161.92.111 port 44582 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:20.430509 sshd-session[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:20.439240 systemd-logind[1990]: New session 2 of user core.
Mar 3 12:46:20.446754 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 3 12:46:20.689568 sshd[2286]: Connection closed by 20.161.92.111 port 44582
Mar 3 12:46:20.690257 sshd-session[2283]: pam_unix(sshd:session): session closed for user core
Mar 3 12:46:20.698625 systemd[1]: sshd@1-172.31.17.163:22-20.161.92.111:44582.service: Deactivated successfully.
Mar 3 12:46:20.702205 systemd[1]: session-2.scope: Deactivated successfully.
Mar 3 12:46:20.704789 systemd-logind[1990]: Session 2 logged out. Waiting for processes to exit.
Mar 3 12:46:20.708176 systemd-logind[1990]: Removed session 2.
Mar 3 12:46:20.777906 systemd[1]: Started sshd@2-172.31.17.163:22-20.161.92.111:44470.service - OpenSSH per-connection server daemon (20.161.92.111:44470).
Mar 3 12:46:21.236203 sshd[2292]: Accepted publickey for core from 20.161.92.111 port 44470 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:21.238133 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:21.245913 systemd-logind[1990]: New session 3 of user core.
Mar 3 12:46:21.255698 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 3 12:46:21.477188 sshd[2295]: Connection closed by 20.161.92.111 port 44470
Mar 3 12:46:21.478804 sshd-session[2292]: pam_unix(sshd:session): session closed for user core
Mar 3 12:46:21.485437 systemd[1]: sshd@2-172.31.17.163:22-20.161.92.111:44470.service: Deactivated successfully.
Mar 3 12:46:21.489049 systemd[1]: session-3.scope: Deactivated successfully.
Mar 3 12:46:21.491179 systemd-logind[1990]: Session 3 logged out. Waiting for processes to exit.
Mar 3 12:46:21.494801 systemd-logind[1990]: Removed session 3.
Mar 3 12:46:22.258318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 12:46:22.264937 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 3 12:46:22.270427 systemd[1]: Startup finished in 3.827s (kernel) + 9.180s (initrd) + 12.459s (userspace) = 25.467s.
Mar 3 12:46:22.273227 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 12:46:24.076523 kubelet[2305]: E0303 12:46:24.076388 2305 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 12:46:24.081567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 12:46:24.081895 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 12:46:24.082930 systemd[1]: kubelet.service: Consumed 1.406s CPU time, 259.5M memory peak.
Mar 3 12:46:24.623408 systemd-resolved[1894]: Clock change detected. Flushing caches.
Mar 3 12:46:31.777744 systemd[1]: Started sshd@3-172.31.17.163:22-20.161.92.111:60594.service - OpenSSH per-connection server daemon (20.161.92.111:60594).
Mar 3 12:46:32.261972 sshd[2317]: Accepted publickey for core from 20.161.92.111 port 60594 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:32.264352 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:32.271708 systemd-logind[1990]: New session 4 of user core.
Mar 3 12:46:32.278603 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 3 12:46:32.521556 sshd[2320]: Connection closed by 20.161.92.111 port 60594
Mar 3 12:46:32.522334 sshd-session[2317]: pam_unix(sshd:session): session closed for user core
Mar 3 12:46:32.529727 systemd[1]: sshd@3-172.31.17.163:22-20.161.92.111:60594.service: Deactivated successfully.
Mar 3 12:46:32.533951 systemd[1]: session-4.scope: Deactivated successfully.
Mar 3 12:46:32.537235 systemd-logind[1990]: Session 4 logged out. Waiting for processes to exit.
Mar 3 12:46:32.539908 systemd-logind[1990]: Removed session 4.
Mar 3 12:46:32.610408 systemd[1]: Started sshd@4-172.31.17.163:22-20.161.92.111:60604.service - OpenSSH per-connection server daemon (20.161.92.111:60604).
Mar 3 12:46:33.064068 sshd[2326]: Accepted publickey for core from 20.161.92.111 port 60604 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:33.066367 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:33.074230 systemd-logind[1990]: New session 5 of user core.
Mar 3 12:46:33.086375 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 3 12:46:33.297743 sshd[2329]: Connection closed by 20.161.92.111 port 60604
Mar 3 12:46:33.298558 sshd-session[2326]: pam_unix(sshd:session): session closed for user core
Mar 3 12:46:33.305630 systemd[1]: sshd@4-172.31.17.163:22-20.161.92.111:60604.service: Deactivated successfully.
Mar 3 12:46:33.312043 systemd[1]: session-5.scope: Deactivated successfully.
Mar 3 12:46:33.314104 systemd-logind[1990]: Session 5 logged out. Waiting for processes to exit.
Mar 3 12:46:33.317669 systemd-logind[1990]: Removed session 5.
Mar 3 12:46:33.395040 systemd[1]: Started sshd@5-172.31.17.163:22-20.161.92.111:60620.service - OpenSSH per-connection server daemon (20.161.92.111:60620).
Mar 3 12:46:33.856846 sshd[2335]: Accepted publickey for core from 20.161.92.111 port 60620 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:33.858957 sshd-session[2335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:33.866060 systemd-logind[1990]: New session 6 of user core.
Mar 3 12:46:33.877387 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 3 12:46:34.097196 sshd[2338]: Connection closed by 20.161.92.111 port 60620
Mar 3 12:46:34.097594 sshd-session[2335]: pam_unix(sshd:session): session closed for user core
Mar 3 12:46:34.105656 systemd[1]: sshd@5-172.31.17.163:22-20.161.92.111:60620.service: Deactivated successfully.
Mar 3 12:46:34.108838 systemd[1]: session-6.scope: Deactivated successfully.
Mar 3 12:46:34.112652 systemd-logind[1990]: Session 6 logged out. Waiting for processes to exit.
Mar 3 12:46:34.115702 systemd-logind[1990]: Removed session 6.
Mar 3 12:46:34.188570 systemd[1]: Started sshd@6-172.31.17.163:22-20.161.92.111:60636.service - OpenSSH per-connection server daemon (20.161.92.111:60636).
Mar 3 12:46:34.355797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 3 12:46:34.358418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 12:46:34.647959 sshd[2344]: Accepted publickey for core from 20.161.92.111 port 60636 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:34.651200 sshd-session[2344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:34.661570 systemd-logind[1990]: New session 7 of user core.
Mar 3 12:46:34.666566 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 3 12:46:34.691106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 12:46:34.706642 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 12:46:34.776936 kubelet[2356]: E0303 12:46:34.776856 2356 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 12:46:34.785314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 12:46:34.785623 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 12:46:34.786544 systemd[1]: kubelet.service: Consumed 312ms CPU time, 105.5M memory peak.
Mar 3 12:46:34.826568 sudo[2363]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 3 12:46:34.827687 sudo[2363]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 12:46:34.846501 sudo[2363]: pam_unix(sudo:session): session closed for user root
Mar 3 12:46:34.925381 sshd[2354]: Connection closed by 20.161.92.111 port 60636
Mar 3 12:46:34.926430 sshd-session[2344]: pam_unix(sshd:session): session closed for user core
Mar 3 12:46:34.934455 systemd[1]: sshd@6-172.31.17.163:22-20.161.92.111:60636.service: Deactivated successfully.
Mar 3 12:46:34.938029 systemd[1]: session-7.scope: Deactivated successfully.
Mar 3 12:46:34.939857 systemd-logind[1990]: Session 7 logged out. Waiting for processes to exit.
Mar 3 12:46:34.942670 systemd-logind[1990]: Removed session 7.
Mar 3 12:46:35.024337 systemd[1]: Started sshd@7-172.31.17.163:22-20.161.92.111:60648.service - OpenSSH per-connection server daemon (20.161.92.111:60648).
Mar 3 12:46:35.486041 sshd[2369]: Accepted publickey for core from 20.161.92.111 port 60648 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:35.488484 sshd-session[2369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:35.495744 systemd-logind[1990]: New session 8 of user core.
Mar 3 12:46:35.502788 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 3 12:46:35.650642 sudo[2374]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 3 12:46:35.651322 sudo[2374]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 12:46:35.658787 sudo[2374]: pam_unix(sudo:session): session closed for user root
Mar 3 12:46:35.668381 sudo[2373]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 3 12:46:35.669790 sudo[2373]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 12:46:35.686086 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 12:46:35.745550 augenrules[2396]: No rules
Mar 3 12:46:35.748400 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 12:46:35.750236 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 12:46:35.751776 sudo[2373]: pam_unix(sudo:session): session closed for user root
Mar 3 12:46:35.829923 sshd[2372]: Connection closed by 20.161.92.111 port 60648
Mar 3 12:46:35.830710 sshd-session[2369]: pam_unix(sshd:session): session closed for user core
Mar 3 12:46:35.836994 systemd[1]: sshd@7-172.31.17.163:22-20.161.92.111:60648.service: Deactivated successfully.
Mar 3 12:46:35.840616 systemd[1]: session-8.scope: Deactivated successfully.
Mar 3 12:46:35.843624 systemd-logind[1990]: Session 8 logged out. Waiting for processes to exit.
Mar 3 12:46:35.847457 systemd-logind[1990]: Removed session 8.
Mar 3 12:46:35.926265 systemd[1]: Started sshd@8-172.31.17.163:22-20.161.92.111:60660.service - OpenSSH per-connection server daemon (20.161.92.111:60660).
Mar 3 12:46:36.396341 sshd[2405]: Accepted publickey for core from 20.161.92.111 port 60660 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:46:36.398486 sshd-session[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:46:36.406184 systemd-logind[1990]: New session 9 of user core.
Mar 3 12:46:36.426410 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 3 12:46:36.560418 sudo[2409]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 3 12:46:36.561007 sudo[2409]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 12:46:37.483101 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 3 12:46:37.507656 (dockerd)[2426]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 3 12:46:38.018827 dockerd[2426]: time="2026-03-03T12:46:38.018711062Z" level=info msg="Starting up"
Mar 3 12:46:38.020513 dockerd[2426]: time="2026-03-03T12:46:38.020447834Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 3 12:46:38.040007 dockerd[2426]: time="2026-03-03T12:46:38.039936350Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 3 12:46:38.094092 dockerd[2426]: time="2026-03-03T12:46:38.093842954Z" level=info msg="Loading containers: start."
Mar 3 12:46:38.108219 kernel: Initializing XFRM netlink socket
Mar 3 12:46:38.478949 (udev-worker)[2449]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:46:38.553879 systemd-networkd[1892]: docker0: Link UP
Mar 3 12:46:38.558776 dockerd[2426]: time="2026-03-03T12:46:38.558704885Z" level=info msg="Loading containers: done."
Mar 3 12:46:38.582186 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1964753083-merged.mount: Deactivated successfully.
Mar 3 12:46:38.587155 dockerd[2426]: time="2026-03-03T12:46:38.587060801Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 3 12:46:38.587347 dockerd[2426]: time="2026-03-03T12:46:38.587212649Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 3 12:46:38.587403 dockerd[2426]: time="2026-03-03T12:46:38.587361149Z" level=info msg="Initializing buildkit"
Mar 3 12:46:38.626322 dockerd[2426]: time="2026-03-03T12:46:38.626219453Z" level=info msg="Completed buildkit initialization"
Mar 3 12:46:38.643106 dockerd[2426]: time="2026-03-03T12:46:38.643038605Z" level=info msg="Daemon has completed initialization"
Mar 3 12:46:38.643541 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 3 12:46:38.643893 dockerd[2426]: time="2026-03-03T12:46:38.643639145Z" level=info msg="API listen on /run/docker.sock"
Mar 3 12:46:39.826519 containerd[2015]: time="2026-03-03T12:46:39.826453615Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 3 12:46:40.442600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617571352.mount: Deactivated successfully.
Mar 3 12:46:41.892359 containerd[2015]: time="2026-03-03T12:46:41.892292301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:41.894088 containerd[2015]: time="2026-03-03T12:46:41.894035169Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174"
Mar 3 12:46:41.895191 containerd[2015]: time="2026-03-03T12:46:41.894914973Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:41.900925 containerd[2015]: time="2026-03-03T12:46:41.900846033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:41.902918 containerd[2015]: time="2026-03-03T12:46:41.902653893Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.07614125s"
Mar 3 12:46:41.902918 containerd[2015]: time="2026-03-03T12:46:41.902709261Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 3 12:46:41.903729 containerd[2015]: time="2026-03-03T12:46:41.903659985Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 3 12:46:43.339411 containerd[2015]: time="2026-03-03T12:46:43.339343652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:43.341056 containerd[2015]: time="2026-03-03T12:46:43.341001980Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106"
Mar 3 12:46:43.342196 containerd[2015]: time="2026-03-03T12:46:43.342017384Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:43.349171 containerd[2015]: time="2026-03-03T12:46:43.347833377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:43.349938 containerd[2015]: time="2026-03-03T12:46:43.349892565Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.446015692s"
Mar 3 12:46:43.350069 containerd[2015]: time="2026-03-03T12:46:43.350042325Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 3 12:46:43.350973 containerd[2015]: time="2026-03-03T12:46:43.350909733Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 3 12:46:44.658177 containerd[2015]: time="2026-03-03T12:46:44.657896879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:44.661041 containerd[2015]: time="2026-03-03T12:46:44.660986327Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305"
Mar 3 12:46:44.661967 containerd[2015]: time="2026-03-03T12:46:44.661898987Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:44.666606 containerd[2015]: time="2026-03-03T12:46:44.666528071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:44.668840 containerd[2015]: time="2026-03-03T12:46:44.668655239Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.317432846s"
Mar 3 12:46:44.668840 containerd[2015]: time="2026-03-03T12:46:44.668708555Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 3 12:46:44.669437 containerd[2015]: time="2026-03-03T12:46:44.669388235Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 3 12:46:44.855781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 3 12:46:44.860481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 12:46:45.285430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 12:46:45.298787 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 12:46:45.497200 kubelet[2712]: E0303 12:46:45.497070 2712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 12:46:45.503928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 12:46:45.505558 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 12:46:45.506580 systemd[1]: kubelet.service: Consumed 324ms CPU time, 105.7M memory peak.
Mar 3 12:46:46.059604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158764082.mount: Deactivated successfully.
Mar 3 12:46:46.649253 containerd[2015]: time="2026-03-03T12:46:46.649199677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:46.651232 containerd[2015]: time="2026-03-03T12:46:46.651182209Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870"
Mar 3 12:46:46.653647 containerd[2015]: time="2026-03-03T12:46:46.653576401Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:46.657978 containerd[2015]: time="2026-03-03T12:46:46.657896353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:46.659484 containerd[2015]: time="2026-03-03T12:46:46.659303161Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.98985681s"
Mar 3 12:46:46.659484 containerd[2015]: time="2026-03-03T12:46:46.659359201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 3 12:46:46.661844 containerd[2015]: time="2026-03-03T12:46:46.661785373Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 3 12:46:47.243257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727953571.mount: Deactivated successfully.
Mar 3 12:46:47.800344 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 3 12:46:48.397740 containerd[2015]: time="2026-03-03T12:46:48.397659254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:48.400002 containerd[2015]: time="2026-03-03T12:46:48.399453710Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 3 12:46:48.402202 containerd[2015]: time="2026-03-03T12:46:48.402128810Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:48.408029 containerd[2015]: time="2026-03-03T12:46:48.407976650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:48.409938 containerd[2015]: time="2026-03-03T12:46:48.409876070Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.748025153s"
Mar 3 12:46:48.410247 containerd[2015]: time="2026-03-03T12:46:48.409936550Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 3 12:46:48.410532 containerd[2015]: time="2026-03-03T12:46:48.410486450Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 3 12:46:48.898443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1741541080.mount: Deactivated successfully.
Mar 3 12:46:48.911883 containerd[2015]: time="2026-03-03T12:46:48.911803732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 3 12:46:48.914116 containerd[2015]: time="2026-03-03T12:46:48.913723300Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 3 12:46:48.916406 containerd[2015]: time="2026-03-03T12:46:48.916362136Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 3 12:46:48.922090 containerd[2015]: time="2026-03-03T12:46:48.922014160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 3 12:46:48.923255 containerd[2015]: time="2026-03-03T12:46:48.923195296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 512.656946ms"
Mar 3 12:46:48.923343 containerd[2015]: time="2026-03-03T12:46:48.923249644Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 3 12:46:48.923894 containerd[2015]: time="2026-03-03T12:46:48.923835844Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 3 12:46:49.434192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840452847.mount: Deactivated successfully.
Mar 3 12:46:50.928675 containerd[2015]: time="2026-03-03T12:46:50.928594206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:50.930900 containerd[2015]: time="2026-03-03T12:46:50.930649566Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Mar 3 12:46:50.933083 containerd[2015]: time="2026-03-03T12:46:50.933022650Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:50.938792 containerd[2015]: time="2026-03-03T12:46:50.938711454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:46:50.940921 containerd[2015]: time="2026-03-03T12:46:50.940710234Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.016817522s"
Mar 3 12:46:50.940921 containerd[2015]: time="2026-03-03T12:46:50.940759602Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 3 12:46:55.605840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 3 12:46:55.609091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 12:46:56.035718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 12:46:56.054666 (kubelet)[2879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 12:46:56.155588 kubelet[2879]: E0303 12:46:56.155507 2879 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 12:46:56.162261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 12:46:56.162742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 12:46:56.165266 systemd[1]: kubelet.service: Consumed 301ms CPU time, 106.9M memory peak.
Mar 3 12:46:59.698352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 12:46:59.699446 systemd[1]: kubelet.service: Consumed 301ms CPU time, 106.9M memory peak.
Mar 3 12:46:59.703540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 12:46:59.760302 systemd[1]: Reload requested from client PID 2893 ('systemctl') (unit session-9.scope)...
Mar 3 12:46:59.760335 systemd[1]: Reloading...
Mar 3 12:46:59.995241 zram_generator::config[2940]: No configuration found.
Mar 3 12:47:00.459945 systemd[1]: Reloading finished in 698 ms.
Mar 3 12:47:00.559057 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 3 12:47:00.559316 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 3 12:47:00.559812 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 12:47:00.559896 systemd[1]: kubelet.service: Consumed 227ms CPU time, 94.9M memory peak.
Mar 3 12:47:00.564109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 12:47:00.908241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 12:47:00.930719 (kubelet)[3000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 3 12:47:01.003874 kubelet[3000]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 3 12:47:01.003874 kubelet[3000]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 3 12:47:01.003874 kubelet[3000]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 3 12:47:01.004433 kubelet[3000]: I0303 12:47:01.003924 3000 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 3 12:47:02.001677 update_engine[1991]: I20260303 12:47:02.001613 1991 update_attempter.cc:509] Updating boot flags...
Mar 3 12:47:02.769869 kubelet[3000]: I0303 12:47:02.769821 3000 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 3 12:47:02.770519 kubelet[3000]: I0303 12:47:02.770488 3000 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 3 12:47:02.771050 kubelet[3000]: I0303 12:47:02.771012 3000 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 3 12:47:02.855514 kubelet[3000]: I0303 12:47:02.855458 3000 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 3 12:47:02.861636 kubelet[3000]: E0303 12:47:02.861558 3000 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.163:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 3 12:47:02.885058 kubelet[3000]: I0303 12:47:02.885008 3000 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 3 12:47:02.899368 kubelet[3000]: I0303 12:47:02.899323 3000 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 3 12:47:02.901979 kubelet[3000]: I0303 12:47:02.901902 3000 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 3 12:47:02.904848 kubelet[3000]: I0303 12:47:02.902410 3000 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 3 12:47:02.904848 kubelet[3000]: I0303 12:47:02.904463 3000 topology_manager.go:138] "Creating topology manager with none policy"
Mar 3 12:47:02.904848 kubelet[3000]: I0303 12:47:02.904486 3000 container_manager_linux.go:303] "Creating device plugin manager"
Mar 3 12:47:02.905864 kubelet[3000]: I0303 12:47:02.905354 3000 state_mem.go:36] "Initialized new in-memory state store"
Mar 3 12:47:02.917657 kubelet[3000]: I0303 12:47:02.917615 3000 kubelet.go:480] "Attempting to sync node with API server"
Mar 3 12:47:02.918292 kubelet[3000]: I0303 12:47:02.918262 3000 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 3 12:47:02.920450 kubelet[3000]: I0303 12:47:02.920409 3000 kubelet.go:386] "Adding apiserver pod source"
Mar 3 12:47:02.923086 kubelet[3000]: I0303 12:47:02.922221 3000 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 3 12:47:02.925302 kubelet[3000]: E0303 12:47:02.925228 3000 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-163&limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 3 12:47:02.930288 kubelet[3000]: I0303 12:47:02.928222 3000 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 3 12:47:02.930288 kubelet[3000]: I0303 12:47:02.929375 3000 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 3 12:47:02.930288 kubelet[3000]: W0303 12:47:02.929627 3000 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 3 12:47:02.936691 kubelet[3000]: I0303 12:47:02.936659 3000 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 3 12:47:02.936956 kubelet[3000]: I0303 12:47:02.936932 3000 server.go:1289] "Started kubelet"
Mar 3 12:47:02.952849 kubelet[3000]: I0303 12:47:02.952809 3000 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 3 12:47:02.956241 kubelet[3000]: E0303 12:47:02.956177 3000 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.163:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 3 12:47:02.963292 kubelet[3000]: E0303 12:47:02.960089 3000 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.163:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.163:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-163.189955941a9b105e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-163,UID:ip-172-31-17-163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-163,},FirstTimestamp:2026-03-03 12:47:02.936891486 +0000 UTC m=+1.998507275,LastTimestamp:2026-03-03 12:47:02.936891486 +0000 UTC m=+1.998507275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-163,}"
Mar 3 12:47:02.966161 kubelet[3000]: I0303 12:47:02.965216 3000 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 3 12:47:02.966161 kubelet[3000]: I0303 12:47:02.966032 3000 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 3 12:47:02.966448 kubelet[3000]: E0303 12:47:02.966390 3000 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-163\" not found"
Mar 3 12:47:02.966941 kubelet[3000]: I0303 12:47:02.966900 3000 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 3 12:47:02.967060 kubelet[3000]: I0303 12:47:02.967003 3000 reconciler.go:26] "Reconciler: start to sync state"
Mar 3 12:47:02.969808 kubelet[3000]: I0303 12:47:02.969772 3000 server.go:317] "Adding debug handlers to kubelet server"
Mar 3 12:47:02.972583 kubelet[3000]: E0303 12:47:02.972535 3000 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 3 12:47:02.983789 kubelet[3000]: E0303 12:47:02.974094 3000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-163?timeout=10s\": dial tcp 172.31.17.163:6443: connect: connection refused" interval="200ms"
Mar 3 12:47:02.986191 kubelet[3000]: I0303 12:47:02.975555 3000 factory.go:223] Registration of the systemd container factory successfully
Mar 3 12:47:02.986191 kubelet[3000]: I0303 12:47:02.984450 3000 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 3 12:47:02.986191 kubelet[3000]: I0303 12:47:02.981095 3000 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 3 12:47:02.986191 kubelet[3000]: I0303 12:47:02.984987 3000 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 3 12:47:02.986191 kubelet[3000]: I0303 12:47:02.981422 3000 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 3 12:47:02.989801 kubelet[3000]: E0303 12:47:02.989734 3000 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 3 12:47:02.992795 kubelet[3000]: I0303 12:47:02.992221 3000 factory.go:223] Registration of the containerd container factory successfully
Mar 3 12:47:03.066695 kubelet[3000]: E0303 12:47:03.066515 3000 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-163\" not found"
Mar 3 12:47:03.098154 kubelet[3000]: I0303 12:47:03.098092 3000 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 3 12:47:03.098154 kubelet[3000]: I0303 12:47:03.098126 3000 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 3 12:47:03.098337 kubelet[3000]: I0303 12:47:03.098176 3000 state_mem.go:36] "Initialized new in-memory state store"
Mar 3 12:47:03.114425 kubelet[3000]: I0303 12:47:03.108410 3000 policy_none.go:49] "None policy: Start"
Mar 3 12:47:03.114425 kubelet[3000]: I0303 12:47:03.108458 3000 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 3 12:47:03.114425 kubelet[3000]: I0303 12:47:03.108484 3000 state_mem.go:35] "Initializing new in-memory state store"
Mar 3 12:47:03.122428 kubelet[3000]: I0303 12:47:03.121363 3000 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 3 12:47:03.131322 kubelet[3000]: I0303 12:47:03.131269 3000 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 3 12:47:03.131455 kubelet[3000]: I0303 12:47:03.131338 3000 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 3 12:47:03.131455 kubelet[3000]: I0303 12:47:03.131379 3000 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 3 12:47:03.131455 kubelet[3000]: I0303 12:47:03.131395 3000 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 3 12:47:03.131580 kubelet[3000]: E0303 12:47:03.131458 3000 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 3 12:47:03.153197 kubelet[3000]: E0303 12:47:03.150908 3000 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 3 12:47:03.167305 kubelet[3000]: E0303 12:47:03.167250 3000 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-163\" not found"
Mar 3 12:47:03.186199 kubelet[3000]: E0303 12:47:03.185857 3000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-163?timeout=10s\": dial tcp 172.31.17.163:6443: connect: connection refused" interval="400ms"
Mar 3 12:47:03.232593 kubelet[3000]: E0303 12:47:03.232556 3000 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 3 12:47:03.246926 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 3 12:47:03.268082 kubelet[3000]: E0303 12:47:03.267949 3000 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-163\" not found"
Mar 3 12:47:03.268938 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 3 12:47:03.277327 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 3 12:47:03.289175 kubelet[3000]: E0303 12:47:03.288733 3000 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 3 12:47:03.289175 kubelet[3000]: I0303 12:47:03.289008 3000 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 3 12:47:03.289175 kubelet[3000]: I0303 12:47:03.289026 3000 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 3 12:47:03.289805 kubelet[3000]: I0303 12:47:03.289782 3000 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 3 12:47:03.293513 kubelet[3000]: E0303 12:47:03.293308 3000 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 3 12:47:03.293513 kubelet[3000]: E0303 12:47:03.293386 3000 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-163\" not found"
Mar 3 12:47:03.391525 kubelet[3000]: I0303 12:47:03.391468 3000 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-163"
Mar 3 12:47:03.392043 kubelet[3000]: E0303 12:47:03.391999 3000 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.163:6443/api/v1/nodes\": dial tcp 172.31.17.163:6443: connect: connection refused" node="ip-172-31-17-163"
Mar 3 12:47:03.455129 systemd[1]: Created slice kubepods-burstable-pod2496d34968fb25e7ec0567b58272df52.slice - libcontainer container kubepods-burstable-pod2496d34968fb25e7ec0567b58272df52.slice.
Mar 3 12:47:03.468807 kubelet[3000]: E0303 12:47:03.468743 3000 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163"
Mar 3 12:47:03.476946 systemd[1]: Created slice kubepods-burstable-podb10003e62be3516f4563cfe4bb1cff27.slice - libcontainer container kubepods-burstable-podb10003e62be3516f4563cfe4bb1cff27.slice.
Mar 3 12:47:03.489685 kubelet[3000]: E0303 12:47:03.489627 3000 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163"
Mar 3 12:47:03.495958 systemd[1]: Created slice kubepods-burstable-pod95dea088ba82c4475c23de6a1ed560ce.slice - libcontainer container kubepods-burstable-pod95dea088ba82c4475c23de6a1ed560ce.slice.
Mar 3 12:47:03.500591 kubelet[3000]: E0303 12:47:03.500533 3000 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Mar 3 12:47:03.573617 kubelet[3000]: I0303 12:47:03.573551 3000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b10003e62be3516f4563cfe4bb1cff27-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b10003e62be3516f4563cfe4bb1cff27\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Mar 3 12:47:03.573744 kubelet[3000]: I0303 12:47:03.573620 3000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b10003e62be3516f4563cfe4bb1cff27-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b10003e62be3516f4563cfe4bb1cff27\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Mar 3 12:47:03.573744 kubelet[3000]: I0303 12:47:03.573663 3000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b10003e62be3516f4563cfe4bb1cff27-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b10003e62be3516f4563cfe4bb1cff27\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Mar 3 12:47:03.573744 kubelet[3000]: I0303 12:47:03.573701 3000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2496d34968fb25e7ec0567b58272df52-ca-certs\") pod \"kube-apiserver-ip-172-31-17-163\" (UID: \"2496d34968fb25e7ec0567b58272df52\") " pod="kube-system/kube-apiserver-ip-172-31-17-163" Mar 3 12:47:03.573744 kubelet[3000]: I0303 12:47:03.573736 3000 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b10003e62be3516f4563cfe4bb1cff27-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b10003e62be3516f4563cfe4bb1cff27\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Mar 3 12:47:03.573958 kubelet[3000]: I0303 12:47:03.573774 3000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b10003e62be3516f4563cfe4bb1cff27-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b10003e62be3516f4563cfe4bb1cff27\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163" Mar 3 12:47:03.573958 kubelet[3000]: I0303 12:47:03.573814 3000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95dea088ba82c4475c23de6a1ed560ce-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-163\" (UID: \"95dea088ba82c4475c23de6a1ed560ce\") " pod="kube-system/kube-scheduler-ip-172-31-17-163" Mar 3 12:47:03.573958 kubelet[3000]: I0303 12:47:03.573852 3000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2496d34968fb25e7ec0567b58272df52-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-163\" (UID: \"2496d34968fb25e7ec0567b58272df52\") " pod="kube-system/kube-apiserver-ip-172-31-17-163" Mar 3 12:47:03.573958 kubelet[3000]: I0303 12:47:03.573888 3000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2496d34968fb25e7ec0567b58272df52-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-163\" (UID: \"2496d34968fb25e7ec0567b58272df52\") " pod="kube-system/kube-apiserver-ip-172-31-17-163" Mar 3 
12:47:03.586996 kubelet[3000]: E0303 12:47:03.586923 3000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-163?timeout=10s\": dial tcp 172.31.17.163:6443: connect: connection refused" interval="800ms" Mar 3 12:47:03.595573 kubelet[3000]: I0303 12:47:03.595513 3000 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-163" Mar 3 12:47:03.596018 kubelet[3000]: E0303 12:47:03.595946 3000 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.163:6443/api/v1/nodes\": dial tcp 172.31.17.163:6443: connect: connection refused" node="ip-172-31-17-163" Mar 3 12:47:03.771305 containerd[2015]: time="2026-03-03T12:47:03.771163650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-163,Uid:2496d34968fb25e7ec0567b58272df52,Namespace:kube-system,Attempt:0,}" Mar 3 12:47:03.791027 containerd[2015]: time="2026-03-03T12:47:03.790952958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-163,Uid:b10003e62be3516f4563cfe4bb1cff27,Namespace:kube-system,Attempt:0,}" Mar 3 12:47:03.810339 containerd[2015]: time="2026-03-03T12:47:03.809570826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-163,Uid:95dea088ba82c4475c23de6a1ed560ce,Namespace:kube-system,Attempt:0,}" Mar 3 12:47:03.821215 containerd[2015]: time="2026-03-03T12:47:03.821074458Z" level=info msg="connecting to shim deb44c6251dea59cf69ce26b8d3bf98c2386cc0c1a0b680641569b36b5a83404" address="unix:///run/containerd/s/d4b4f18a63ecc1b587866b9e7afdeb46483b112a90d80c92add93fe3dea31930" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:47:03.848387 kubelet[3000]: E0303 12:47:03.848297 3000 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://172.31.17.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 3 12:47:03.876743 containerd[2015]: time="2026-03-03T12:47:03.876677826Z" level=info msg="connecting to shim 211ead717d54288020168e5eb1e32f7d26d6ee1fe32f5a511d75b4974f4985db" address="unix:///run/containerd/s/e6b00b8ae182afd29970967ac85a8713fc34a87e5d33747759d66625c2e56578" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:47:03.899022 containerd[2015]: time="2026-03-03T12:47:03.898825375Z" level=info msg="connecting to shim 06bd6696c5b50368a7ad567154a4db07ce8271065e4e9f995dc1989d30c0b3e8" address="unix:///run/containerd/s/0ff30e89b316dd8345da99ee765ef43b31b3dbdbfd46efad9ac6dfb0e6f1ffcd" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:47:03.899646 kubelet[3000]: E0303 12:47:03.899569 3000 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.163:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 3 12:47:03.900887 systemd[1]: Started cri-containerd-deb44c6251dea59cf69ce26b8d3bf98c2386cc0c1a0b680641569b36b5a83404.scope - libcontainer container deb44c6251dea59cf69ce26b8d3bf98c2386cc0c1a0b680641569b36b5a83404. Mar 3 12:47:03.975713 systemd[1]: Started cri-containerd-211ead717d54288020168e5eb1e32f7d26d6ee1fe32f5a511d75b4974f4985db.scope - libcontainer container 211ead717d54288020168e5eb1e32f7d26d6ee1fe32f5a511d75b4974f4985db. Mar 3 12:47:03.989421 systemd[1]: Started cri-containerd-06bd6696c5b50368a7ad567154a4db07ce8271065e4e9f995dc1989d30c0b3e8.scope - libcontainer container 06bd6696c5b50368a7ad567154a4db07ce8271065e4e9f995dc1989d30c0b3e8. 
Mar 3 12:47:04.004030 kubelet[3000]: I0303 12:47:04.003990 3000 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-163" Mar 3 12:47:04.008181 kubelet[3000]: E0303 12:47:04.005546 3000 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.163:6443/api/v1/nodes\": dial tcp 172.31.17.163:6443: connect: connection refused" node="ip-172-31-17-163" Mar 3 12:47:04.049545 containerd[2015]: time="2026-03-03T12:47:04.049275867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-163,Uid:2496d34968fb25e7ec0567b58272df52,Namespace:kube-system,Attempt:0,} returns sandbox id \"deb44c6251dea59cf69ce26b8d3bf98c2386cc0c1a0b680641569b36b5a83404\"" Mar 3 12:47:04.064928 containerd[2015]: time="2026-03-03T12:47:04.064862463Z" level=info msg="CreateContainer within sandbox \"deb44c6251dea59cf69ce26b8d3bf98c2386cc0c1a0b680641569b36b5a83404\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 3 12:47:04.068044 kubelet[3000]: E0303 12:47:04.067978 3000 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-163&limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 3 12:47:04.089887 containerd[2015]: time="2026-03-03T12:47:04.089582488Z" level=info msg="Container 9331a969e040f4ceda677b5512a67c45ef1710b1d821541a1388fcc040e4b2c0: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:04.119352 containerd[2015]: time="2026-03-03T12:47:04.119210008Z" level=info msg="CreateContainer within sandbox \"deb44c6251dea59cf69ce26b8d3bf98c2386cc0c1a0b680641569b36b5a83404\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9331a969e040f4ceda677b5512a67c45ef1710b1d821541a1388fcc040e4b2c0\"" Mar 3 12:47:04.120834 
containerd[2015]: time="2026-03-03T12:47:04.120676156Z" level=info msg="StartContainer for \"9331a969e040f4ceda677b5512a67c45ef1710b1d821541a1388fcc040e4b2c0\"" Mar 3 12:47:04.130518 containerd[2015]: time="2026-03-03T12:47:04.130379764Z" level=info msg="connecting to shim 9331a969e040f4ceda677b5512a67c45ef1710b1d821541a1388fcc040e4b2c0" address="unix:///run/containerd/s/d4b4f18a63ecc1b587866b9e7afdeb46483b112a90d80c92add93fe3dea31930" protocol=ttrpc version=3 Mar 3 12:47:04.138273 containerd[2015]: time="2026-03-03T12:47:04.136719448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-163,Uid:b10003e62be3516f4563cfe4bb1cff27,Namespace:kube-system,Attempt:0,} returns sandbox id \"211ead717d54288020168e5eb1e32f7d26d6ee1fe32f5a511d75b4974f4985db\"" Mar 3 12:47:04.148916 containerd[2015]: time="2026-03-03T12:47:04.148832212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-163,Uid:95dea088ba82c4475c23de6a1ed560ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"06bd6696c5b50368a7ad567154a4db07ce8271065e4e9f995dc1989d30c0b3e8\"" Mar 3 12:47:04.153471 containerd[2015]: time="2026-03-03T12:47:04.153369520Z" level=info msg="CreateContainer within sandbox \"211ead717d54288020168e5eb1e32f7d26d6ee1fe32f5a511d75b4974f4985db\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 3 12:47:04.172791 containerd[2015]: time="2026-03-03T12:47:04.172643728Z" level=info msg="CreateContainer within sandbox \"06bd6696c5b50368a7ad567154a4db07ce8271065e4e9f995dc1989d30c0b3e8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 3 12:47:04.174824 systemd[1]: Started cri-containerd-9331a969e040f4ceda677b5512a67c45ef1710b1d821541a1388fcc040e4b2c0.scope - libcontainer container 9331a969e040f4ceda677b5512a67c45ef1710b1d821541a1388fcc040e4b2c0. 
Mar 3 12:47:04.180532 containerd[2015]: time="2026-03-03T12:47:04.180425860Z" level=info msg="Container b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:04.204975 containerd[2015]: time="2026-03-03T12:47:04.204905968Z" level=info msg="Container 446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:04.212124 containerd[2015]: time="2026-03-03T12:47:04.212040856Z" level=info msg="CreateContainer within sandbox \"211ead717d54288020168e5eb1e32f7d26d6ee1fe32f5a511d75b4974f4985db\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825\"" Mar 3 12:47:04.213153 containerd[2015]: time="2026-03-03T12:47:04.213099112Z" level=info msg="StartContainer for \"b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825\"" Mar 3 12:47:04.218811 containerd[2015]: time="2026-03-03T12:47:04.218745232Z" level=info msg="connecting to shim b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825" address="unix:///run/containerd/s/e6b00b8ae182afd29970967ac85a8713fc34a87e5d33747759d66625c2e56578" protocol=ttrpc version=3 Mar 3 12:47:04.224629 containerd[2015]: time="2026-03-03T12:47:04.224306188Z" level=info msg="CreateContainer within sandbox \"06bd6696c5b50368a7ad567154a4db07ce8271065e4e9f995dc1989d30c0b3e8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7\"" Mar 3 12:47:04.226807 containerd[2015]: time="2026-03-03T12:47:04.226750168Z" level=info msg="StartContainer for \"446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7\"" Mar 3 12:47:04.229894 containerd[2015]: time="2026-03-03T12:47:04.229826224Z" level=info msg="connecting to shim 446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7" 
address="unix:///run/containerd/s/0ff30e89b316dd8345da99ee765ef43b31b3dbdbfd46efad9ac6dfb0e6f1ffcd" protocol=ttrpc version=3 Mar 3 12:47:04.271470 systemd[1]: Started cri-containerd-b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825.scope - libcontainer container b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825. Mar 3 12:47:04.296702 systemd[1]: Started cri-containerd-446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7.scope - libcontainer container 446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7. Mar 3 12:47:04.345787 containerd[2015]: time="2026-03-03T12:47:04.344484269Z" level=info msg="StartContainer for \"9331a969e040f4ceda677b5512a67c45ef1710b1d821541a1388fcc040e4b2c0\" returns successfully" Mar 3 12:47:04.391775 kubelet[3000]: E0303 12:47:04.391520 3000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-163?timeout=10s\": dial tcp 172.31.17.163:6443: connect: connection refused" interval="1.6s" Mar 3 12:47:04.428843 containerd[2015]: time="2026-03-03T12:47:04.428781557Z" level=info msg="StartContainer for \"b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825\" returns successfully" Mar 3 12:47:04.490260 kubelet[3000]: E0303 12:47:04.489631 3000 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 3 12:47:04.533730 containerd[2015]: time="2026-03-03T12:47:04.533595102Z" level=info msg="StartContainer for \"446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7\" returns successfully" Mar 3 12:47:04.811592 kubelet[3000]: I0303 12:47:04.811038 3000 kubelet_node_status.go:75] 
"Attempting to register node" node="ip-172-31-17-163" Mar 3 12:47:05.209813 kubelet[3000]: E0303 12:47:05.208961 3000 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Mar 3 12:47:05.216306 kubelet[3000]: E0303 12:47:05.215895 3000 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Mar 3 12:47:05.224215 kubelet[3000]: E0303 12:47:05.222925 3000 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Mar 3 12:47:06.226183 kubelet[3000]: E0303 12:47:06.225863 3000 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Mar 3 12:47:06.227282 kubelet[3000]: E0303 12:47:06.226892 3000 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Mar 3 12:47:06.228242 kubelet[3000]: E0303 12:47:06.228198 3000 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Mar 3 12:47:07.229445 kubelet[3000]: E0303 12:47:07.229411 3000 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Mar 3 12:47:07.230293 kubelet[3000]: E0303 12:47:07.229411 3000 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Mar 3 12:47:08.891870 kubelet[3000]: E0303 12:47:08.891749 3000 kubelet.go:3305] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Mar 3 12:47:09.950050 kubelet[3000]: I0303 12:47:09.949998 3000 apiserver.go:52] "Watching apiserver" Mar 3 12:47:10.030191 kubelet[3000]: E0303 12:47:10.030059 3000 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-163\" not found" node="ip-172-31-17-163" Mar 3 12:47:10.067310 kubelet[3000]: I0303 12:47:10.067236 3000 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 3 12:47:10.129015 kubelet[3000]: I0303 12:47:10.128536 3000 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-163" Mar 3 12:47:10.167290 kubelet[3000]: I0303 12:47:10.167225 3000 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-163" Mar 3 12:47:10.218719 kubelet[3000]: E0303 12:47:10.218082 3000 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-163\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-163" Mar 3 12:47:10.218719 kubelet[3000]: I0303 12:47:10.218126 3000 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-163" Mar 3 12:47:10.229626 kubelet[3000]: E0303 12:47:10.229282 3000 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-163\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-163" Mar 3 12:47:10.229626 kubelet[3000]: I0303 12:47:10.229344 3000 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-163" Mar 3 12:47:10.242824 kubelet[3000]: E0303 12:47:10.242751 3000 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-163\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-163" Mar 3 12:47:12.414387 systemd[1]: Reload requested from client PID 3550 ('systemctl') (unit session-9.scope)... Mar 3 12:47:12.414427 systemd[1]: Reloading... Mar 3 12:47:12.650237 zram_generator::config[3603]: No configuration found. Mar 3 12:47:13.124826 systemd[1]: Reloading finished in 709 ms. Mar 3 12:47:13.175279 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:47:13.194017 systemd[1]: kubelet.service: Deactivated successfully. Mar 3 12:47:13.194566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 12:47:13.194666 systemd[1]: kubelet.service: Consumed 2.446s CPU time, 129.2M memory peak. Mar 3 12:47:13.198970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:47:13.600027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 12:47:13.619755 (kubelet)[3654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 3 12:47:13.722737 kubelet[3654]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 3 12:47:13.722737 kubelet[3654]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 3 12:47:13.722737 kubelet[3654]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 3 12:47:13.725372 kubelet[3654]: I0303 12:47:13.722820 3654 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 3 12:47:13.739433 kubelet[3654]: I0303 12:47:13.739356 3654 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 3 12:47:13.739433 kubelet[3654]: I0303 12:47:13.739410 3654 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 12:47:13.740095 kubelet[3654]: I0303 12:47:13.740045 3654 server.go:956] "Client rotation is on, will bootstrap in background" Mar 3 12:47:13.746245 kubelet[3654]: I0303 12:47:13.746162 3654 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 3 12:47:13.752412 kubelet[3654]: I0303 12:47:13.751710 3654 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 12:47:13.762771 kubelet[3654]: I0303 12:47:13.762654 3654 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 12:47:13.777704 kubelet[3654]: I0303 12:47:13.777608 3654 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 3 12:47:13.778005 kubelet[3654]: I0303 12:47:13.777965 3654 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 12:47:13.778260 kubelet[3654]: I0303 12:47:13.778004 3654 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 12:47:13.778260 kubelet[3654]: I0303 12:47:13.778271 3654 topology_manager.go:138] "Creating topology manager with none policy" Mar 3 
12:47:13.778260 kubelet[3654]: I0303 12:47:13.778288 3654 container_manager_linux.go:303] "Creating device plugin manager" Mar 3 12:47:13.779210 kubelet[3654]: I0303 12:47:13.778365 3654 state_mem.go:36] "Initialized new in-memory state store" Mar 3 12:47:13.779210 kubelet[3654]: I0303 12:47:13.778614 3654 kubelet.go:480] "Attempting to sync node with API server" Mar 3 12:47:13.779210 kubelet[3654]: I0303 12:47:13.778648 3654 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 12:47:13.779210 kubelet[3654]: I0303 12:47:13.778694 3654 kubelet.go:386] "Adding apiserver pod source" Mar 3 12:47:13.779210 kubelet[3654]: I0303 12:47:13.778723 3654 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 12:47:13.790294 kubelet[3654]: I0303 12:47:13.790255 3654 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 12:47:13.795162 kubelet[3654]: I0303 12:47:13.792456 3654 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 12:47:13.796595 sudo[3668]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 3 12:47:13.797240 sudo[3668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 3 12:47:13.813561 kubelet[3654]: I0303 12:47:13.813528 3654 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 3 12:47:13.813801 kubelet[3654]: I0303 12:47:13.813650 3654 server.go:1289] "Started kubelet" Mar 3 12:47:13.817256 kubelet[3654]: I0303 12:47:13.817095 3654 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 12:47:13.822556 kubelet[3654]: I0303 12:47:13.822409 3654 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 3 12:47:13.823342 kubelet[3654]: I0303 12:47:13.823302 3654 server.go:255] "Starting 
to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 3 12:47:13.825630 kubelet[3654]: I0303 12:47:13.825573 3654 server.go:317] "Adding debug handlers to kubelet server"
Mar 3 12:47:13.835202 kubelet[3654]: I0303 12:47:13.835010 3654 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 3 12:47:13.849350 kubelet[3654]: I0303 12:47:13.849303 3654 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 3 12:47:13.853243 kubelet[3654]: E0303 12:47:13.850515 3654 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-163\" not found"
Mar 3 12:47:13.854547 kubelet[3654]: I0303 12:47:13.854516 3654 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 3 12:47:13.862057 kubelet[3654]: I0303 12:47:13.861988 3654 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 3 12:47:13.877167 kubelet[3654]: I0303 12:47:13.876628 3654 reconciler.go:26] "Reconciler: start to sync state"
Mar 3 12:47:13.923807 kubelet[3654]: I0303 12:47:13.921786 3654 factory.go:223] Registration of the systemd container factory successfully
Mar 3 12:47:13.929116 kubelet[3654]: I0303 12:47:13.929063 3654 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 3 12:47:13.931837 kubelet[3654]: E0303 12:47:13.931663 3654 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 3 12:47:13.941741 kubelet[3654]: I0303 12:47:13.941662 3654 factory.go:223] Registration of the containerd container factory successfully
Mar 3 12:47:13.951527 kubelet[3654]: I0303 12:47:13.951461 3654 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 3 12:47:13.957573 kubelet[3654]: E0303 12:47:13.957429 3654 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-163\" not found"
Mar 3 12:47:13.970158 kubelet[3654]: I0303 12:47:13.969978 3654 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 3 12:47:13.970158 kubelet[3654]: I0303 12:47:13.970036 3654 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 3 12:47:13.970158 kubelet[3654]: I0303 12:47:13.970074 3654 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 3 12:47:13.970158 kubelet[3654]: I0303 12:47:13.970088 3654 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 3 12:47:13.971309 kubelet[3654]: E0303 12:47:13.970540 3654 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 3 12:47:14.072167 kubelet[3654]: E0303 12:47:14.071713 3654 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 3 12:47:14.241125 kubelet[3654]: I0303 12:47:14.241088 3654 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 3 12:47:14.241344 kubelet[3654]: I0303 12:47:14.241319 3654 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 3 12:47:14.241456 kubelet[3654]: I0303 12:47:14.241439 3654 state_mem.go:36] "Initialized new in-memory state store"
Mar 3 12:47:14.241779 kubelet[3654]: I0303 12:47:14.241742 3654 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 3 12:47:14.241918 kubelet[3654]: I0303 12:47:14.241877 3654 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 3 12:47:14.243108 kubelet[3654]: I0303 12:47:14.241995 3654 policy_none.go:49] "None policy: Start"
Mar 3 12:47:14.243108 kubelet[3654]: I0303 12:47:14.242024 3654 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 3 12:47:14.243108 kubelet[3654]: I0303 12:47:14.242047 3654 state_mem.go:35] "Initializing new in-memory state store"
Mar 3 12:47:14.243541 kubelet[3654]: I0303 12:47:14.243516 3654 state_mem.go:75] "Updated machine memory state"
Mar 3 12:47:14.257654 kubelet[3654]: E0303 12:47:14.257583 3654 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 3 12:47:14.258236 kubelet[3654]: I0303 12:47:14.258209 3654 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 3 12:47:14.259121 kubelet[3654]: I0303 12:47:14.258410 3654 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 3 12:47:14.260975 kubelet[3654]: I0303 12:47:14.260920 3654 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 3 12:47:14.266205 kubelet[3654]: E0303 12:47:14.266167 3654 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 3 12:47:14.277333 kubelet[3654]: I0303 12:47:14.277259 3654 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-163"
Mar 3 12:47:14.278910 kubelet[3654]: I0303 12:47:14.277926 3654 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-163"
Mar 3 12:47:14.281164 kubelet[3654]: I0303 12:47:14.280982 3654 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-163"
Mar 3 12:47:14.390486 kubelet[3654]: I0303 12:47:14.390421 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2496d34968fb25e7ec0567b58272df52-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-163\" (UID: \"2496d34968fb25e7ec0567b58272df52\") " pod="kube-system/kube-apiserver-ip-172-31-17-163"
Mar 3 12:47:14.390653 kubelet[3654]: I0303 12:47:14.390490 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b10003e62be3516f4563cfe4bb1cff27-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b10003e62be3516f4563cfe4bb1cff27\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163"
Mar 3 12:47:14.390653 kubelet[3654]: I0303 12:47:14.390543 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b10003e62be3516f4563cfe4bb1cff27-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b10003e62be3516f4563cfe4bb1cff27\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163"
Mar 3 12:47:14.390653 kubelet[3654]: I0303 12:47:14.390580 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b10003e62be3516f4563cfe4bb1cff27-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b10003e62be3516f4563cfe4bb1cff27\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163"
Mar 3 12:47:14.390806 kubelet[3654]: I0303 12:47:14.390653 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b10003e62be3516f4563cfe4bb1cff27-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b10003e62be3516f4563cfe4bb1cff27\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163"
Mar 3 12:47:14.390806 kubelet[3654]: I0303 12:47:14.390690 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2496d34968fb25e7ec0567b58272df52-ca-certs\") pod \"kube-apiserver-ip-172-31-17-163\" (UID: \"2496d34968fb25e7ec0567b58272df52\") " pod="kube-system/kube-apiserver-ip-172-31-17-163"
Mar 3 12:47:14.390806 kubelet[3654]: I0303 12:47:14.390732 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2496d34968fb25e7ec0567b58272df52-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-163\" (UID: \"2496d34968fb25e7ec0567b58272df52\") " pod="kube-system/kube-apiserver-ip-172-31-17-163"
Mar 3 12:47:14.390806 kubelet[3654]: I0303 12:47:14.390770 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b10003e62be3516f4563cfe4bb1cff27-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-163\" (UID: \"b10003e62be3516f4563cfe4bb1cff27\") " pod="kube-system/kube-controller-manager-ip-172-31-17-163"
Mar 3 12:47:14.391004 kubelet[3654]: I0303 12:47:14.390807 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95dea088ba82c4475c23de6a1ed560ce-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-163\" (UID: \"95dea088ba82c4475c23de6a1ed560ce\") " pod="kube-system/kube-scheduler-ip-172-31-17-163"
Mar 3 12:47:14.394747 kubelet[3654]: I0303 12:47:14.394701 3654 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-163"
Mar 3 12:47:14.414420 kubelet[3654]: I0303 12:47:14.414056 3654 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-163"
Mar 3 12:47:14.415698 kubelet[3654]: I0303 12:47:14.415516 3654 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-163"
Mar 3 12:47:14.641204 sudo[3668]: pam_unix(sudo:session): session closed for user root
Mar 3 12:47:14.786233 kubelet[3654]: I0303 12:47:14.785713 3654 apiserver.go:52] "Watching apiserver"
Mar 3 12:47:14.855301 kubelet[3654]: I0303 12:47:14.855231 3654 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 3 12:47:15.213475 kubelet[3654]: I0303 12:47:15.213381 3654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-163" podStartSLOduration=1.213361263 podStartE2EDuration="1.213361263s" podCreationTimestamp="2026-03-03 12:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:15.212477031 +0000 UTC m=+1.581826893" watchObservedRunningTime="2026-03-03 12:47:15.213361263 +0000 UTC m=+1.582711101"
Mar 3 12:47:15.214032 kubelet[3654]: I0303 12:47:15.213546 3654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-163" podStartSLOduration=1.213535707 podStartE2EDuration="1.213535707s" podCreationTimestamp="2026-03-03 12:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:15.194547975 +0000 UTC m=+1.563897825" watchObservedRunningTime="2026-03-03 12:47:15.213535707 +0000 UTC m=+1.582885521"
Mar 3 12:47:15.255286 kubelet[3654]: I0303 12:47:15.255202 3654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-163" podStartSLOduration=1.255180123 podStartE2EDuration="1.255180123s" podCreationTimestamp="2026-03-03 12:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:15.233821587 +0000 UTC m=+1.603171437" watchObservedRunningTime="2026-03-03 12:47:15.255180123 +0000 UTC m=+1.624529961"
Mar 3 12:47:16.870106 kubelet[3654]: I0303 12:47:16.869625 3654 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 3 12:47:16.873682 containerd[2015]: time="2026-03-03T12:47:16.872831707Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 3 12:47:16.876582 kubelet[3654]: I0303 12:47:16.875688 3654 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 3 12:47:17.194300 sudo[2409]: pam_unix(sudo:session): session closed for user root
Mar 3 12:47:17.272487 sshd[2408]: Connection closed by 20.161.92.111 port 60660
Mar 3 12:47:17.273496 sshd-session[2405]: pam_unix(sshd:session): session closed for user core
Mar 3 12:47:17.282298 systemd[1]: sshd@8-172.31.17.163:22-20.161.92.111:60660.service: Deactivated successfully.
Mar 3 12:47:17.287503 systemd[1]: session-9.scope: Deactivated successfully.
Mar 3 12:47:17.289336 systemd[1]: session-9.scope: Consumed 12.396s CPU time, 266.2M memory peak.
Mar 3 12:47:17.296439 systemd-logind[1990]: Session 9 logged out. Waiting for processes to exit.
Mar 3 12:47:17.298858 systemd-logind[1990]: Removed session 9.
Mar 3 12:47:17.393952 systemd[1]: Created slice kubepods-besteffort-pod03c27049_e227_47e0_a287_d6686cff0657.slice - libcontainer container kubepods-besteffort-pod03c27049_e227_47e0_a287_d6686cff0657.slice.
Mar 3 12:47:17.452705 systemd[1]: Created slice kubepods-burstable-pod4e2d770e_339c_407d_8504_9dba62c5b666.slice - libcontainer container kubepods-burstable-pod4e2d770e_339c_407d_8504_9dba62c5b666.slice.
Mar 3 12:47:17.509884 kubelet[3654]: I0303 12:47:17.509840 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03c27049-e227-47e0-a287-d6686cff0657-xtables-lock\") pod \"kube-proxy-7drl5\" (UID: \"03c27049-e227-47e0-a287-d6686cff0657\") " pod="kube-system/kube-proxy-7drl5"
Mar 3 12:47:17.510220 kubelet[3654]: I0303 12:47:17.510184 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03c27049-e227-47e0-a287-d6686cff0657-lib-modules\") pod \"kube-proxy-7drl5\" (UID: \"03c27049-e227-47e0-a287-d6686cff0657\") " pod="kube-system/kube-proxy-7drl5"
Mar 3 12:47:17.510537 kubelet[3654]: I0303 12:47:17.510476 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e2d770e-339c-407d-8504-9dba62c5b666-cilium-config-path\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.511230 kubelet[3654]: I0303 12:47:17.511130 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-host-proc-sys-net\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.512697 kubelet[3654]: I0303 12:47:17.511514 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-cilium-cgroup\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.512697 kubelet[3654]: I0303 12:47:17.511576 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-xtables-lock\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.512697 kubelet[3654]: I0303 12:47:17.511612 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e2d770e-339c-407d-8504-9dba62c5b666-clustermesh-secrets\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.512697 kubelet[3654]: I0303 12:47:17.511647 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e2d770e-339c-407d-8504-9dba62c5b666-hubble-tls\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.512697 kubelet[3654]: I0303 12:47:17.511709 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4djt\" (UniqueName: \"kubernetes.io/projected/03c27049-e227-47e0-a287-d6686cff0657-kube-api-access-h4djt\") pod \"kube-proxy-7drl5\" (UID: \"03c27049-e227-47e0-a287-d6686cff0657\") " pod="kube-system/kube-proxy-7drl5"
Mar 3 12:47:17.512697 kubelet[3654]: I0303 12:47:17.511750 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-bpf-maps\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.513108 kubelet[3654]: I0303 12:47:17.511789 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-host-proc-sys-kernel\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.513108 kubelet[3654]: I0303 12:47:17.511828 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/03c27049-e227-47e0-a287-d6686cff0657-kube-proxy\") pod \"kube-proxy-7drl5\" (UID: \"03c27049-e227-47e0-a287-d6686cff0657\") " pod="kube-system/kube-proxy-7drl5"
Mar 3 12:47:17.513108 kubelet[3654]: I0303 12:47:17.511863 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-hostproc\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.513108 kubelet[3654]: I0303 12:47:17.511914 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-cni-path\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.513108 kubelet[3654]: I0303 12:47:17.511950 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-etc-cni-netd\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.513108 kubelet[3654]: I0303 12:47:17.511983 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-lib-modules\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.513443 kubelet[3654]: I0303 12:47:17.512020 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nt7z\" (UniqueName: \"kubernetes.io/projected/4e2d770e-339c-407d-8504-9dba62c5b666-kube-api-access-4nt7z\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.513443 kubelet[3654]: I0303 12:47:17.512053 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-cilium-run\") pod \"cilium-27g47\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") " pod="kube-system/cilium-27g47"
Mar 3 12:47:17.713483 containerd[2015]: time="2026-03-03T12:47:17.713283799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7drl5,Uid:03c27049-e227-47e0-a287-d6686cff0657,Namespace:kube-system,Attempt:0,}"
Mar 3 12:47:17.765178 containerd[2015]: time="2026-03-03T12:47:17.764017447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-27g47,Uid:4e2d770e-339c-407d-8504-9dba62c5b666,Namespace:kube-system,Attempt:0,}"
Mar 3 12:47:17.770197 containerd[2015]: time="2026-03-03T12:47:17.769740019Z" level=info msg="connecting to shim 9ec8cda82b09af603b383218948e298712aaa0393a235f5e1931b2486dc12ea6" address="unix:///run/containerd/s/cd62c4328a2f278b4c8b238da64375b7b3336a48e6a93dd6ee5f3bbb713e538e" namespace=k8s.io protocol=ttrpc version=3
Mar 3 12:47:17.833684 containerd[2015]: time="2026-03-03T12:47:17.833611292Z" level=info msg="connecting to shim d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b" address="unix:///run/containerd/s/4cbefd446cefd9d946180ec48f04deb21a85b9c7633d910898f2b84f38334301" namespace=k8s.io protocol=ttrpc version=3
Mar 3 12:47:17.856782 systemd[1]: Started cri-containerd-9ec8cda82b09af603b383218948e298712aaa0393a235f5e1931b2486dc12ea6.scope - libcontainer container 9ec8cda82b09af603b383218948e298712aaa0393a235f5e1931b2486dc12ea6.
Mar 3 12:47:17.950396 systemd[1]: Started cri-containerd-d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b.scope - libcontainer container d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b.
Mar 3 12:47:18.117394 kubelet[3654]: I0303 12:47:18.116638 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxqws\" (UniqueName: \"kubernetes.io/projected/18a27539-2792-4802-81f2-44e7006ce455-kube-api-access-pxqws\") pod \"cilium-operator-6c4d7847fc-8gmfg\" (UID: \"18a27539-2792-4802-81f2-44e7006ce455\") " pod="kube-system/cilium-operator-6c4d7847fc-8gmfg"
Mar 3 12:47:18.120341 kubelet[3654]: I0303 12:47:18.117675 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18a27539-2792-4802-81f2-44e7006ce455-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8gmfg\" (UID: \"18a27539-2792-4802-81f2-44e7006ce455\") " pod="kube-system/cilium-operator-6c4d7847fc-8gmfg"
Mar 3 12:47:18.122904 containerd[2015]: time="2026-03-03T12:47:18.122826761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-27g47,Uid:4e2d770e-339c-407d-8504-9dba62c5b666,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\""
Mar 3 12:47:18.131077 systemd[1]: Created slice kubepods-besteffort-pod18a27539_2792_4802_81f2_44e7006ce455.slice - libcontainer container kubepods-besteffort-pod18a27539_2792_4802_81f2_44e7006ce455.slice.
Mar 3 12:47:18.135718 containerd[2015]: time="2026-03-03T12:47:18.132889781Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 3 12:47:18.220518 containerd[2015]: time="2026-03-03T12:47:18.220079322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7drl5,Uid:03c27049-e227-47e0-a287-d6686cff0657,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ec8cda82b09af603b383218948e298712aaa0393a235f5e1931b2486dc12ea6\""
Mar 3 12:47:18.243058 containerd[2015]: time="2026-03-03T12:47:18.242991726Z" level=info msg="CreateContainer within sandbox \"9ec8cda82b09af603b383218948e298712aaa0393a235f5e1931b2486dc12ea6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 3 12:47:18.269173 containerd[2015]: time="2026-03-03T12:47:18.268704918Z" level=info msg="Container 34cf4953bf22f8d243b229b159c4fac3f00e659bdf45b10347e56a45c771b37b: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:18.289930 containerd[2015]: time="2026-03-03T12:47:18.289878306Z" level=info msg="CreateContainer within sandbox \"9ec8cda82b09af603b383218948e298712aaa0393a235f5e1931b2486dc12ea6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"34cf4953bf22f8d243b229b159c4fac3f00e659bdf45b10347e56a45c771b37b\""
Mar 3 12:47:18.291698 containerd[2015]: time="2026-03-03T12:47:18.291625458Z" level=info msg="StartContainer for \"34cf4953bf22f8d243b229b159c4fac3f00e659bdf45b10347e56a45c771b37b\""
Mar 3 12:47:18.295110 containerd[2015]: time="2026-03-03T12:47:18.294942270Z" level=info msg="connecting to shim 34cf4953bf22f8d243b229b159c4fac3f00e659bdf45b10347e56a45c771b37b" address="unix:///run/containerd/s/cd62c4328a2f278b4c8b238da64375b7b3336a48e6a93dd6ee5f3bbb713e538e" protocol=ttrpc version=3
Mar 3 12:47:18.332449 systemd[1]: Started cri-containerd-34cf4953bf22f8d243b229b159c4fac3f00e659bdf45b10347e56a45c771b37b.scope - libcontainer container 34cf4953bf22f8d243b229b159c4fac3f00e659bdf45b10347e56a45c771b37b.
Mar 3 12:47:18.443486 containerd[2015]: time="2026-03-03T12:47:18.443229067Z" level=info msg="StartContainer for \"34cf4953bf22f8d243b229b159c4fac3f00e659bdf45b10347e56a45c771b37b\" returns successfully"
Mar 3 12:47:18.446515 containerd[2015]: time="2026-03-03T12:47:18.446449903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8gmfg,Uid:18a27539-2792-4802-81f2-44e7006ce455,Namespace:kube-system,Attempt:0,}"
Mar 3 12:47:18.489739 containerd[2015]: time="2026-03-03T12:47:18.489615739Z" level=info msg="connecting to shim 6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2" address="unix:///run/containerd/s/ba8725a305f06212a1340d57fe3d8a331b3cff19909fd8d753ee45e0f15fd39b" namespace=k8s.io protocol=ttrpc version=3
Mar 3 12:47:18.542365 systemd[1]: Started cri-containerd-6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2.scope - libcontainer container 6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2.
Mar 3 12:47:18.663016 containerd[2015]: time="2026-03-03T12:47:18.658114856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8gmfg,Uid:18a27539-2792-4802-81f2-44e7006ce455,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\""
Mar 3 12:47:19.186918 kubelet[3654]: I0303 12:47:19.186645 3654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7drl5" podStartSLOduration=2.186598399 podStartE2EDuration="2.186598399s" podCreationTimestamp="2026-03-03 12:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:19.185297011 +0000 UTC m=+5.554646837" watchObservedRunningTime="2026-03-03 12:47:19.186598399 +0000 UTC m=+5.555948237"
Mar 3 12:47:24.517105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3358360013.mount: Deactivated successfully.
Mar 3 12:47:27.041204 containerd[2015]: time="2026-03-03T12:47:27.040543310Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:47:27.043012 containerd[2015]: time="2026-03-03T12:47:27.042956618Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 3 12:47:27.046187 containerd[2015]: time="2026-03-03T12:47:27.045787466Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:47:27.048911 containerd[2015]: time="2026-03-03T12:47:27.048861782Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.912626701s"
Mar 3 12:47:27.049107 containerd[2015]: time="2026-03-03T12:47:27.049054610Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 3 12:47:27.052520 containerd[2015]: time="2026-03-03T12:47:27.051644678Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 3 12:47:27.058873 containerd[2015]: time="2026-03-03T12:47:27.058821122Z" level=info msg="CreateContainer within sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 3 12:47:27.074852 containerd[2015]: time="2026-03-03T12:47:27.074802350Z" level=info msg="Container ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:27.092102 containerd[2015]: time="2026-03-03T12:47:27.091936862Z" level=info msg="CreateContainer within sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\""
Mar 3 12:47:27.093368 containerd[2015]: time="2026-03-03T12:47:27.093303050Z" level=info msg="StartContainer for \"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\""
Mar 3 12:47:27.096068 containerd[2015]: time="2026-03-03T12:47:27.095998274Z" level=info msg="connecting to shim ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852" address="unix:///run/containerd/s/4cbefd446cefd9d946180ec48f04deb21a85b9c7633d910898f2b84f38334301" protocol=ttrpc version=3
Mar 3 12:47:27.139463 systemd[1]: Started cri-containerd-ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852.scope - libcontainer container ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852.
Mar 3 12:47:27.214289 containerd[2015]: time="2026-03-03T12:47:27.214242242Z" level=info msg="StartContainer for \"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\" returns successfully"
Mar 3 12:47:27.244077 systemd[1]: cri-containerd-ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852.scope: Deactivated successfully.
Mar 3 12:47:27.249329 containerd[2015]: time="2026-03-03T12:47:27.249076767Z" level=info msg="received container exit event container_id:\"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\" id:\"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\" pid:4082 exited_at:{seconds:1772542047 nanos:248190591}"
Mar 3 12:47:27.287827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852-rootfs.mount: Deactivated successfully.
Mar 3 12:47:28.653459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount179025660.mount: Deactivated successfully.
Mar 3 12:47:29.234229 containerd[2015]: time="2026-03-03T12:47:29.233531500Z" level=info msg="CreateContainer within sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 3 12:47:29.257171 containerd[2015]: time="2026-03-03T12:47:29.254659769Z" level=info msg="Container d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:29.277630 containerd[2015]: time="2026-03-03T12:47:29.277573601Z" level=info msg="CreateContainer within sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\""
Mar 3 12:47:29.279050 containerd[2015]: time="2026-03-03T12:47:29.278991905Z" level=info msg="StartContainer for \"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\""
Mar 3 12:47:29.282574 containerd[2015]: time="2026-03-03T12:47:29.282416273Z" level=info msg="connecting to shim d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc" address="unix:///run/containerd/s/4cbefd446cefd9d946180ec48f04deb21a85b9c7633d910898f2b84f38334301" protocol=ttrpc version=3
Mar 3 12:47:29.329519 systemd[1]: Started cri-containerd-d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc.scope - libcontainer container d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc.
Mar 3 12:47:29.389736 containerd[2015]: time="2026-03-03T12:47:29.389689805Z" level=info msg="StartContainer for \"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\" returns successfully"
Mar 3 12:47:29.418781 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 3 12:47:29.419344 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 3 12:47:29.419649 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 3 12:47:29.423718 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 12:47:29.429749 systemd[1]: cri-containerd-d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc.scope: Deactivated successfully.
Mar 3 12:47:29.435098 containerd[2015]: time="2026-03-03T12:47:29.435036821Z" level=info msg="received container exit event container_id:\"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\" id:\"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\" pid:4138 exited_at:{seconds:1772542049 nanos:431910809}"
Mar 3 12:47:29.469954 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 12:47:29.641308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc-rootfs.mount: Deactivated successfully.
Mar 3 12:47:30.146558 containerd[2015]: time="2026-03-03T12:47:30.146469953Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:47:30.148798 containerd[2015]: time="2026-03-03T12:47:30.148413905Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 3 12:47:30.150929 containerd[2015]: time="2026-03-03T12:47:30.150875405Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 12:47:30.153646 containerd[2015]: time="2026-03-03T12:47:30.153581057Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.101877003s"
Mar 3 12:47:30.153795 containerd[2015]: time="2026-03-03T12:47:30.153643973Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 3 12:47:30.163744 containerd[2015]: time="2026-03-03T12:47:30.163647065Z" level=info msg="CreateContainer within sandbox \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 3 12:47:30.191366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount943795336.mount: Deactivated successfully.
Mar 3 12:47:30.195638 containerd[2015]: time="2026-03-03T12:47:30.191323877Z" level=info msg="Container f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:30.200077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3557849538.mount: Deactivated successfully.
Mar 3 12:47:30.219405 containerd[2015]: time="2026-03-03T12:47:30.219167669Z" level=info msg="CreateContainer within sandbox \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\""
Mar 3 12:47:30.220253 containerd[2015]: time="2026-03-03T12:47:30.220211681Z" level=info msg="StartContainer for \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\""
Mar 3 12:47:30.221950 containerd[2015]: time="2026-03-03T12:47:30.221895917Z" level=info msg="connecting to shim f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee" address="unix:///run/containerd/s/ba8725a305f06212a1340d57fe3d8a331b3cff19909fd8d753ee45e0f15fd39b" protocol=ttrpc version=3
Mar 3 12:47:30.265532 containerd[2015]: time="2026-03-03T12:47:30.265482006Z" level=info msg="CreateContainer within sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 3 12:47:30.300060 systemd[1]: Started cri-containerd-f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee.scope - libcontainer container f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee.
Mar 3 12:47:30.308314 containerd[2015]: time="2026-03-03T12:47:30.308260014Z" level=info msg="Container e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:30.331700 containerd[2015]: time="2026-03-03T12:47:30.331620534Z" level=info msg="CreateContainer within sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\""
Mar 3 12:47:30.333287 containerd[2015]: time="2026-03-03T12:47:30.332470278Z" level=info msg="StartContainer for \"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\""
Mar 3 12:47:30.346752 containerd[2015]: time="2026-03-03T12:47:30.346679250Z" level=info msg="connecting to shim e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201" address="unix:///run/containerd/s/4cbefd446cefd9d946180ec48f04deb21a85b9c7633d910898f2b84f38334301" protocol=ttrpc version=3
Mar 3 12:47:30.398432 containerd[2015]: time="2026-03-03T12:47:30.397489830Z" level=info msg="StartContainer for \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\" returns successfully"
Mar 3 12:47:30.399475 systemd[1]: Started cri-containerd-e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201.scope - libcontainer container e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201.
Mar 3 12:47:30.525558 containerd[2015]: time="2026-03-03T12:47:30.525437479Z" level=info msg="StartContainer for \"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\" returns successfully"
Mar 3 12:47:30.530312 systemd[1]: cri-containerd-e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201.scope: Deactivated successfully.
Mar 3 12:47:30.539109 containerd[2015]: time="2026-03-03T12:47:30.539058067Z" level=info msg="received container exit event container_id:\"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\" id:\"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\" pid:4221 exited_at:{seconds:1772542050 nanos:537911587}"
Mar 3 12:47:31.277618 containerd[2015]: time="2026-03-03T12:47:31.277564087Z" level=info msg="CreateContainer within sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 3 12:47:31.306213 containerd[2015]: time="2026-03-03T12:47:31.305462731Z" level=info msg="Container 6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:31.337107 containerd[2015]: time="2026-03-03T12:47:31.337034359Z" level=info msg="CreateContainer within sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\""
Mar 3 12:47:31.342170 containerd[2015]: time="2026-03-03T12:47:31.341231599Z" level=info msg="StartContainer for \"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\""
Mar 3 12:47:31.343300 containerd[2015]: time="2026-03-03T12:47:31.343249687Z" level=info msg="connecting to shim 6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5" address="unix:///run/containerd/s/4cbefd446cefd9d946180ec48f04deb21a85b9c7633d910898f2b84f38334301" protocol=ttrpc version=3
Mar 3 12:47:31.408435 systemd[1]: Started cri-containerd-6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5.scope - libcontainer container 6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5.
Mar 3 12:47:31.549833 containerd[2015]: time="2026-03-03T12:47:31.546619388Z" level=info msg="StartContainer for \"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\" returns successfully"
Mar 3 12:47:31.547262 systemd[1]: cri-containerd-6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5.scope: Deactivated successfully.
Mar 3 12:47:31.553745 containerd[2015]: time="2026-03-03T12:47:31.553528364Z" level=info msg="received container exit event container_id:\"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\" id:\"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\" pid:4264 exited_at:{seconds:1772542051 nanos:552082100}"
Mar 3 12:47:31.620984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5-rootfs.mount: Deactivated successfully.
Mar 3 12:47:32.310640 containerd[2015]: time="2026-03-03T12:47:32.309935084Z" level=info msg="CreateContainer within sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 3 12:47:32.364549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785362663.mount: Deactivated successfully.
Mar 3 12:47:32.371179 kubelet[3654]: I0303 12:47:32.368122 3654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8gmfg" podStartSLOduration=2.882219107 podStartE2EDuration="14.367248944s" podCreationTimestamp="2026-03-03 12:47:18 +0000 UTC" firstStartedPulling="2026-03-03 12:47:18.669970904 +0000 UTC m=+5.039320730" lastFinishedPulling="2026-03-03 12:47:30.155000753 +0000 UTC m=+16.524350567" observedRunningTime="2026-03-03 12:47:31.584217296 +0000 UTC m=+17.953567146" watchObservedRunningTime="2026-03-03 12:47:32.367248944 +0000 UTC m=+18.736598782"
Mar 3 12:47:32.373670 containerd[2015]: time="2026-03-03T12:47:32.372583592Z" level=info msg="Container 1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:32.388944 containerd[2015]: time="2026-03-03T12:47:32.388875980Z" level=info msg="CreateContainer within sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\""
Mar 3 12:47:32.390001 containerd[2015]: time="2026-03-03T12:47:32.389947784Z" level=info msg="StartContainer for \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\""
Mar 3 12:47:32.392283 containerd[2015]: time="2026-03-03T12:47:32.392121416Z" level=info msg="connecting to shim 1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782" address="unix:///run/containerd/s/4cbefd446cefd9d946180ec48f04deb21a85b9c7633d910898f2b84f38334301" protocol=ttrpc version=3
Mar 3 12:47:32.434437 systemd[1]: Started cri-containerd-1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782.scope - libcontainer container 1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782.
Mar 3 12:47:32.527554 containerd[2015]: time="2026-03-03T12:47:32.526630293Z" level=info msg="StartContainer for \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\" returns successfully"
Mar 3 12:47:32.699260 kubelet[3654]: I0303 12:47:32.699081 3654 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 3 12:47:32.775051 systemd[1]: Created slice kubepods-burstable-poded09e98f_8302_4ec0_8360_66d38b5ee5a3.slice - libcontainer container kubepods-burstable-poded09e98f_8302_4ec0_8360_66d38b5ee5a3.slice.
Mar 3 12:47:32.784212 kubelet[3654]: I0303 12:47:32.782552 3654 status_manager.go:895] "Failed to get status for pod" podUID="ed09e98f-8302-4ec0-8360-66d38b5ee5a3" pod="kube-system/coredns-674b8bbfcf-z98kl" err="pods \"coredns-674b8bbfcf-z98kl\" is forbidden: User \"system:node:ip-172-31-17-163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-163' and this object"
Mar 3 12:47:32.787189 kubelet[3654]: E0303 12:47:32.786742 3654 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-17-163\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-163' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
Mar 3 12:47:32.802857 systemd[1]: Created slice kubepods-burstable-pod99a30316_ed04_4836_83ea_daacee7eb6b1.slice - libcontainer container kubepods-burstable-pod99a30316_ed04_4836_83ea_daacee7eb6b1.slice.
Mar 3 12:47:32.845349 kubelet[3654]: I0303 12:47:32.845227 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed09e98f-8302-4ec0-8360-66d38b5ee5a3-config-volume\") pod \"coredns-674b8bbfcf-z98kl\" (UID: \"ed09e98f-8302-4ec0-8360-66d38b5ee5a3\") " pod="kube-system/coredns-674b8bbfcf-z98kl"
Mar 3 12:47:32.845692 kubelet[3654]: I0303 12:47:32.845524 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99a30316-ed04-4836-83ea-daacee7eb6b1-config-volume\") pod \"coredns-674b8bbfcf-2dvzl\" (UID: \"99a30316-ed04-4836-83ea-daacee7eb6b1\") " pod="kube-system/coredns-674b8bbfcf-2dvzl"
Mar 3 12:47:32.845692 kubelet[3654]: I0303 12:47:32.845623 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gdm9\" (UniqueName: \"kubernetes.io/projected/ed09e98f-8302-4ec0-8360-66d38b5ee5a3-kube-api-access-4gdm9\") pod \"coredns-674b8bbfcf-z98kl\" (UID: \"ed09e98f-8302-4ec0-8360-66d38b5ee5a3\") " pod="kube-system/coredns-674b8bbfcf-z98kl"
Mar 3 12:47:32.845911 kubelet[3654]: I0303 12:47:32.845834 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drrcr\" (UniqueName: \"kubernetes.io/projected/99a30316-ed04-4836-83ea-daacee7eb6b1-kube-api-access-drrcr\") pod \"coredns-674b8bbfcf-2dvzl\" (UID: \"99a30316-ed04-4836-83ea-daacee7eb6b1\") " pod="kube-system/coredns-674b8bbfcf-2dvzl"
Mar 3 12:47:33.341169 kubelet[3654]: I0303 12:47:33.341023 3654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-27g47" podStartSLOduration=7.421920096 podStartE2EDuration="16.341001897s" podCreationTimestamp="2026-03-03 12:47:17 +0000 UTC" firstStartedPulling="2026-03-03 12:47:18.131573069 +0000 UTC m=+4.500922883" lastFinishedPulling="2026-03-03 12:47:27.050654786 +0000 UTC m=+13.420004684" observedRunningTime="2026-03-03 12:47:33.337468437 +0000 UTC m=+19.706818275" watchObservedRunningTime="2026-03-03 12:47:33.341001897 +0000 UTC m=+19.710351747"
Mar 3 12:47:33.709824 containerd[2015]: time="2026-03-03T12:47:33.709737323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2dvzl,Uid:99a30316-ed04-4836-83ea-daacee7eb6b1,Namespace:kube-system,Attempt:0,}"
Mar 3 12:47:33.992375 containerd[2015]: time="2026-03-03T12:47:33.991846968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z98kl,Uid:ed09e98f-8302-4ec0-8360-66d38b5ee5a3,Namespace:kube-system,Attempt:0,}"
Mar 3 12:47:35.766005 (udev-worker)[4431]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:47:35.771653 (udev-worker)[4384]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:47:35.772850 systemd-networkd[1892]: cilium_host: Link UP
Mar 3 12:47:35.776376 systemd-networkd[1892]: cilium_net: Link UP
Mar 3 12:47:35.776802 systemd-networkd[1892]: cilium_net: Gained carrier
Mar 3 12:47:35.777128 systemd-networkd[1892]: cilium_host: Gained carrier
Mar 3 12:47:35.959676 systemd-networkd[1892]: cilium_vxlan: Link UP
Mar 3 12:47:35.959971 systemd-networkd[1892]: cilium_vxlan: Gained carrier
Mar 3 12:47:36.236404 systemd-networkd[1892]: cilium_net: Gained IPv6LL
Mar 3 12:47:36.356974 systemd-networkd[1892]: cilium_host: Gained IPv6LL
Mar 3 12:47:36.532176 kernel: NET: Registered PF_ALG protocol family
Mar 3 12:47:37.572448 systemd-networkd[1892]: cilium_vxlan: Gained IPv6LL
Mar 3 12:47:37.896907 (udev-worker)[4437]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:47:37.930537 systemd-networkd[1892]: lxc_health: Link UP
Mar 3 12:47:37.934551 systemd-networkd[1892]: lxc_health: Gained carrier
Mar 3 12:47:38.292519 systemd-networkd[1892]: lxca670c76aaea1: Link UP
Mar 3 12:47:38.301195 kernel: eth0: renamed from tmp644f2
Mar 3 12:47:38.303622 systemd-networkd[1892]: lxca670c76aaea1: Gained carrier
Mar 3 12:47:38.549030 systemd-networkd[1892]: lxc69a30320bf7f: Link UP
Mar 3 12:47:38.554221 kernel: eth0: renamed from tmpe4d19
Mar 3 12:47:38.555211 systemd-networkd[1892]: lxc69a30320bf7f: Gained carrier
Mar 3 12:47:39.364493 systemd-networkd[1892]: lxca670c76aaea1: Gained IPv6LL
Mar 3 12:47:39.429408 systemd-networkd[1892]: lxc_health: Gained IPv6LL
Mar 3 12:47:40.068406 systemd-networkd[1892]: lxc69a30320bf7f: Gained IPv6LL
Mar 3 12:47:42.622919 ntpd[2198]: Listen normally on 6 cilium_host 192.168.0.184:123
Mar 3 12:47:42.624992 ntpd[2198]: 3 Mar 12:47:42 ntpd[2198]: Listen normally on 6 cilium_host 192.168.0.184:123
Mar 3 12:47:42.624992 ntpd[2198]: 3 Mar 12:47:42 ntpd[2198]: Listen normally on 7 cilium_net [fe80::d4e9:36ff:fec6:8215%4]:123
Mar 3 12:47:42.624992 ntpd[2198]: 3 Mar 12:47:42 ntpd[2198]: Listen normally on 8 cilium_host [fe80::fc9d:73ff:fe6d:3c84%5]:123
Mar 3 12:47:42.624992 ntpd[2198]: 3 Mar 12:47:42 ntpd[2198]: Listen normally on 9 cilium_vxlan [fe80::34c3:6ff:fe3b:8bb8%6]:123
Mar 3 12:47:42.624992 ntpd[2198]: 3 Mar 12:47:42 ntpd[2198]: Listen normally on 10 lxc_health [fe80::f051:94ff:fe9f:fd84%8]:123
Mar 3 12:47:42.624992 ntpd[2198]: 3 Mar 12:47:42 ntpd[2198]: Listen normally on 11 lxca670c76aaea1 [fe80::e037:a0ff:fe11:d7e3%10]:123
Mar 3 12:47:42.624992 ntpd[2198]: 3 Mar 12:47:42 ntpd[2198]: Listen normally on 12 lxc69a30320bf7f [fe80::6013:9ff:feeb:410b%12]:123
Mar 3 12:47:42.623016 ntpd[2198]: Listen normally on 7 cilium_net [fe80::d4e9:36ff:fec6:8215%4]:123
Mar 3 12:47:42.623065 ntpd[2198]: Listen normally on 8 cilium_host [fe80::fc9d:73ff:fe6d:3c84%5]:123
Mar 3 12:47:42.623110 ntpd[2198]: Listen normally on 9 cilium_vxlan [fe80::34c3:6ff:fe3b:8bb8%6]:123
Mar 3 12:47:42.623204 ntpd[2198]: Listen normally on 10 lxc_health [fe80::f051:94ff:fe9f:fd84%8]:123
Mar 3 12:47:42.623252 ntpd[2198]: Listen normally on 11 lxca670c76aaea1 [fe80::e037:a0ff:fe11:d7e3%10]:123
Mar 3 12:47:42.623299 ntpd[2198]: Listen normally on 12 lxc69a30320bf7f [fe80::6013:9ff:feeb:410b%12]:123
Mar 3 12:47:46.796784 containerd[2015]: time="2026-03-03T12:47:46.796695120Z" level=info msg="connecting to shim e4d19123dd8c85e0c592bd085494e379c84e0f9e2e39013d035001c78c2eeb26" address="unix:///run/containerd/s/aa9b2e5da64aad7e0c51f68df4bfc5167e6b3d145c3a8d9d39947a7e83eca513" namespace=k8s.io protocol=ttrpc version=3
Mar 3 12:47:46.808174 containerd[2015]: time="2026-03-03T12:47:46.806892924Z" level=info msg="connecting to shim 644f286b7763f7d3962f54ec0e60f50ac79f6a2b26fba1a9c2ef81607d316f53" address="unix:///run/containerd/s/314d5ef5202c3dee3de4834df75a1bbb687f93654b335225540c9ea237603c13" namespace=k8s.io protocol=ttrpc version=3
Mar 3 12:47:46.896820 systemd[1]: Started cri-containerd-644f286b7763f7d3962f54ec0e60f50ac79f6a2b26fba1a9c2ef81607d316f53.scope - libcontainer container 644f286b7763f7d3962f54ec0e60f50ac79f6a2b26fba1a9c2ef81607d316f53.
Mar 3 12:47:46.910981 systemd[1]: Started cri-containerd-e4d19123dd8c85e0c592bd085494e379c84e0f9e2e39013d035001c78c2eeb26.scope - libcontainer container e4d19123dd8c85e0c592bd085494e379c84e0f9e2e39013d035001c78c2eeb26.
Mar 3 12:47:47.032908 containerd[2015]: time="2026-03-03T12:47:47.032741937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2dvzl,Uid:99a30316-ed04-4836-83ea-daacee7eb6b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"644f286b7763f7d3962f54ec0e60f50ac79f6a2b26fba1a9c2ef81607d316f53\""
Mar 3 12:47:47.048647 containerd[2015]: time="2026-03-03T12:47:47.048211821Z" level=info msg="CreateContainer within sandbox \"644f286b7763f7d3962f54ec0e60f50ac79f6a2b26fba1a9c2ef81607d316f53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 3 12:47:47.079938 containerd[2015]: time="2026-03-03T12:47:47.078767253Z" level=info msg="Container 81d55e0aad76538a9c2f0b6c93a940740a26d0915d7b147366517eeda6175160: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:47.094345 containerd[2015]: time="2026-03-03T12:47:47.094269957Z" level=info msg="CreateContainer within sandbox \"644f286b7763f7d3962f54ec0e60f50ac79f6a2b26fba1a9c2ef81607d316f53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"81d55e0aad76538a9c2f0b6c93a940740a26d0915d7b147366517eeda6175160\""
Mar 3 12:47:47.097622 containerd[2015]: time="2026-03-03T12:47:47.097537725Z" level=info msg="StartContainer for \"81d55e0aad76538a9c2f0b6c93a940740a26d0915d7b147366517eeda6175160\""
Mar 3 12:47:47.103669 containerd[2015]: time="2026-03-03T12:47:47.101812533Z" level=info msg="connecting to shim 81d55e0aad76538a9c2f0b6c93a940740a26d0915d7b147366517eeda6175160" address="unix:///run/containerd/s/314d5ef5202c3dee3de4834df75a1bbb687f93654b335225540c9ea237603c13" protocol=ttrpc version=3
Mar 3 12:47:47.125314 containerd[2015]: time="2026-03-03T12:47:47.125249553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z98kl,Uid:ed09e98f-8302-4ec0-8360-66d38b5ee5a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4d19123dd8c85e0c592bd085494e379c84e0f9e2e39013d035001c78c2eeb26\""
Mar 3 12:47:47.145185 containerd[2015]: time="2026-03-03T12:47:47.144298449Z" level=info msg="CreateContainer within sandbox \"e4d19123dd8c85e0c592bd085494e379c84e0f9e2e39013d035001c78c2eeb26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 3 12:47:47.159941 systemd[1]: Started cri-containerd-81d55e0aad76538a9c2f0b6c93a940740a26d0915d7b147366517eeda6175160.scope - libcontainer container 81d55e0aad76538a9c2f0b6c93a940740a26d0915d7b147366517eeda6175160.
Mar 3 12:47:47.173127 containerd[2015]: time="2026-03-03T12:47:47.172979566Z" level=info msg="Container 2ba3c507c70d359844165dc2dd4ad68c574277bb502e7b1d28573ab740250189: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:47:47.190024 containerd[2015]: time="2026-03-03T12:47:47.189706822Z" level=info msg="CreateContainer within sandbox \"e4d19123dd8c85e0c592bd085494e379c84e0f9e2e39013d035001c78c2eeb26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ba3c507c70d359844165dc2dd4ad68c574277bb502e7b1d28573ab740250189\""
Mar 3 12:47:47.191932 containerd[2015]: time="2026-03-03T12:47:47.191710966Z" level=info msg="StartContainer for \"2ba3c507c70d359844165dc2dd4ad68c574277bb502e7b1d28573ab740250189\""
Mar 3 12:47:47.196434 containerd[2015]: time="2026-03-03T12:47:47.196272862Z" level=info msg="connecting to shim 2ba3c507c70d359844165dc2dd4ad68c574277bb502e7b1d28573ab740250189" address="unix:///run/containerd/s/aa9b2e5da64aad7e0c51f68df4bfc5167e6b3d145c3a8d9d39947a7e83eca513" protocol=ttrpc version=3
Mar 3 12:47:47.256245 systemd[1]: Started cri-containerd-2ba3c507c70d359844165dc2dd4ad68c574277bb502e7b1d28573ab740250189.scope - libcontainer container 2ba3c507c70d359844165dc2dd4ad68c574277bb502e7b1d28573ab740250189.
Mar 3 12:47:47.264810 containerd[2015]: time="2026-03-03T12:47:47.264677074Z" level=info msg="StartContainer for \"81d55e0aad76538a9c2f0b6c93a940740a26d0915d7b147366517eeda6175160\" returns successfully"
Mar 3 12:47:47.332571 containerd[2015]: time="2026-03-03T12:47:47.331783186Z" level=info msg="StartContainer for \"2ba3c507c70d359844165dc2dd4ad68c574277bb502e7b1d28573ab740250189\" returns successfully"
Mar 3 12:47:47.391056 kubelet[3654]: I0303 12:47:47.390939 3654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2dvzl" podStartSLOduration=29.390917051 podStartE2EDuration="29.390917051s" podCreationTimestamp="2026-03-03 12:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:47.388275035 +0000 UTC m=+33.757624873" watchObservedRunningTime="2026-03-03 12:47:47.390917051 +0000 UTC m=+33.760266877"
Mar 3 12:47:47.435584 kubelet[3654]: I0303 12:47:47.435455 3654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z98kl" podStartSLOduration=29.435405647 podStartE2EDuration="29.435405647s" podCreationTimestamp="2026-03-03 12:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:47.430990415 +0000 UTC m=+33.800340289" watchObservedRunningTime="2026-03-03 12:47:47.435405647 +0000 UTC m=+33.804755473"
Mar 3 12:47:56.633611 systemd[1]: Started sshd@9-172.31.17.163:22-20.161.92.111:44348.service - OpenSSH per-connection server daemon (20.161.92.111:44348).
Mar 3 12:47:57.122669 sshd[4978]: Accepted publickey for core from 20.161.92.111 port 44348 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:47:57.126042 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:47:57.136312 systemd-logind[1990]: New session 10 of user core.
Mar 3 12:47:57.147204 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 3 12:47:57.528969 sshd[4981]: Connection closed by 20.161.92.111 port 44348
Mar 3 12:47:57.530000 sshd-session[4978]: pam_unix(sshd:session): session closed for user core
Mar 3 12:47:57.537794 systemd[1]: sshd@9-172.31.17.163:22-20.161.92.111:44348.service: Deactivated successfully.
Mar 3 12:47:57.544417 systemd[1]: session-10.scope: Deactivated successfully.
Mar 3 12:47:57.547202 systemd-logind[1990]: Session 10 logged out. Waiting for processes to exit.
Mar 3 12:47:57.550462 systemd-logind[1990]: Removed session 10.
Mar 3 12:48:02.621591 systemd[1]: Started sshd@10-172.31.17.163:22-20.161.92.111:40742.service - OpenSSH per-connection server daemon (20.161.92.111:40742).
Mar 3 12:48:03.077225 sshd[4996]: Accepted publickey for core from 20.161.92.111 port 40742 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:03.079683 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:03.087975 systemd-logind[1990]: New session 11 of user core.
Mar 3 12:48:03.102432 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 3 12:48:03.443694 sshd[4999]: Connection closed by 20.161.92.111 port 40742
Mar 3 12:48:03.443495 sshd-session[4996]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:03.451763 systemd[1]: sshd@10-172.31.17.163:22-20.161.92.111:40742.service: Deactivated successfully.
Mar 3 12:48:03.451770 systemd-logind[1990]: Session 11 logged out. Waiting for processes to exit.
Mar 3 12:48:03.459036 systemd[1]: session-11.scope: Deactivated successfully.
Mar 3 12:48:03.463195 systemd-logind[1990]: Removed session 11.
Mar 3 12:48:08.535268 systemd[1]: Started sshd@11-172.31.17.163:22-20.161.92.111:40748.service - OpenSSH per-connection server daemon (20.161.92.111:40748).
Mar 3 12:48:08.999363 sshd[5012]: Accepted publickey for core from 20.161.92.111 port 40748 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:09.002044 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:09.011240 systemd-logind[1990]: New session 12 of user core.
Mar 3 12:48:09.021441 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 3 12:48:09.372944 sshd[5015]: Connection closed by 20.161.92.111 port 40748
Mar 3 12:48:09.371807 sshd-session[5012]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:09.379112 systemd[1]: sshd@11-172.31.17.163:22-20.161.92.111:40748.service: Deactivated successfully.
Mar 3 12:48:09.385938 systemd[1]: session-12.scope: Deactivated successfully.
Mar 3 12:48:09.387848 systemd-logind[1990]: Session 12 logged out. Waiting for processes to exit.
Mar 3 12:48:09.392275 systemd-logind[1990]: Removed session 12.
Mar 3 12:48:14.468646 systemd[1]: Started sshd@12-172.31.17.163:22-20.161.92.111:58290.service - OpenSSH per-connection server daemon (20.161.92.111:58290).
Mar 3 12:48:14.929182 sshd[5029]: Accepted publickey for core from 20.161.92.111 port 58290 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:14.931215 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:14.938713 systemd-logind[1990]: New session 13 of user core.
Mar 3 12:48:14.948407 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 3 12:48:15.303258 sshd[5032]: Connection closed by 20.161.92.111 port 58290
Mar 3 12:48:15.303019 sshd-session[5029]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:15.310989 systemd[1]: sshd@12-172.31.17.163:22-20.161.92.111:58290.service: Deactivated successfully.
Mar 3 12:48:15.314249 systemd[1]: session-13.scope: Deactivated successfully.
Mar 3 12:48:15.318415 systemd-logind[1990]: Session 13 logged out. Waiting for processes to exit.
Mar 3 12:48:15.320568 systemd-logind[1990]: Removed session 13.
Mar 3 12:48:15.407808 systemd[1]: Started sshd@13-172.31.17.163:22-20.161.92.111:58300.service - OpenSSH per-connection server daemon (20.161.92.111:58300).
Mar 3 12:48:15.906519 sshd[5044]: Accepted publickey for core from 20.161.92.111 port 58300 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:15.908802 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:15.918218 systemd-logind[1990]: New session 14 of user core.
Mar 3 12:48:15.922615 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 3 12:48:16.369204 sshd[5047]: Connection closed by 20.161.92.111 port 58300
Mar 3 12:48:16.369472 sshd-session[5044]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:16.378382 systemd[1]: sshd@13-172.31.17.163:22-20.161.92.111:58300.service: Deactivated successfully.
Mar 3 12:48:16.384116 systemd[1]: session-14.scope: Deactivated successfully.
Mar 3 12:48:16.386189 systemd-logind[1990]: Session 14 logged out. Waiting for processes to exit.
Mar 3 12:48:16.389428 systemd-logind[1990]: Removed session 14.
Mar 3 12:48:16.469592 systemd[1]: Started sshd@14-172.31.17.163:22-20.161.92.111:58314.service - OpenSSH per-connection server daemon (20.161.92.111:58314).
Mar 3 12:48:16.967209 sshd[5057]: Accepted publickey for core from 20.161.92.111 port 58314 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:16.969634 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:16.979057 systemd-logind[1990]: New session 15 of user core.
Mar 3 12:48:16.990424 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 3 12:48:17.355303 sshd[5061]: Connection closed by 20.161.92.111 port 58314
Mar 3 12:48:17.356244 sshd-session[5057]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:17.366618 systemd[1]: sshd@14-172.31.17.163:22-20.161.92.111:58314.service: Deactivated successfully.
Mar 3 12:48:17.372535 systemd[1]: session-15.scope: Deactivated successfully.
Mar 3 12:48:17.375753 systemd-logind[1990]: Session 15 logged out. Waiting for processes to exit.
Mar 3 12:48:17.378617 systemd-logind[1990]: Removed session 15.
Mar 3 12:48:22.453459 systemd[1]: Started sshd@15-172.31.17.163:22-20.161.92.111:38864.service - OpenSSH per-connection server daemon (20.161.92.111:38864).
Mar 3 12:48:22.943265 sshd[5077]: Accepted publickey for core from 20.161.92.111 port 38864 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:22.945828 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:22.953410 systemd-logind[1990]: New session 16 of user core.
Mar 3 12:48:22.961775 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 3 12:48:23.339252 sshd[5080]: Connection closed by 20.161.92.111 port 38864
Mar 3 12:48:23.340122 sshd-session[5077]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:23.348730 systemd[1]: sshd@15-172.31.17.163:22-20.161.92.111:38864.service: Deactivated successfully.
Mar 3 12:48:23.353385 systemd[1]: session-16.scope: Deactivated successfully.
Mar 3 12:48:23.357696 systemd-logind[1990]: Session 16 logged out. Waiting for processes to exit.
Mar 3 12:48:23.360943 systemd-logind[1990]: Removed session 16.
Mar 3 12:48:28.427673 systemd[1]: Started sshd@16-172.31.17.163:22-20.161.92.111:38868.service - OpenSSH per-connection server daemon (20.161.92.111:38868).
Mar 3 12:48:28.883389 sshd[5091]: Accepted publickey for core from 20.161.92.111 port 38868 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:28.886366 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:28.894376 systemd-logind[1990]: New session 17 of user core.
Mar 3 12:48:28.902381 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 3 12:48:29.242169 sshd[5094]: Connection closed by 20.161.92.111 port 38868
Mar 3 12:48:29.243278 sshd-session[5091]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:29.252220 systemd-logind[1990]: Session 17 logged out. Waiting for processes to exit.
Mar 3 12:48:29.253614 systemd[1]: sshd@16-172.31.17.163:22-20.161.92.111:38868.service: Deactivated successfully.
Mar 3 12:48:29.259607 systemd[1]: session-17.scope: Deactivated successfully.
Mar 3 12:48:29.264502 systemd-logind[1990]: Removed session 17.
Mar 3 12:48:34.340774 systemd[1]: Started sshd@17-172.31.17.163:22-20.161.92.111:57470.service - OpenSSH per-connection server daemon (20.161.92.111:57470).
Mar 3 12:48:34.811973 sshd[5107]: Accepted publickey for core from 20.161.92.111 port 57470 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:34.813697 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:34.821504 systemd-logind[1990]: New session 18 of user core.
Mar 3 12:48:34.831424 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 3 12:48:35.180653 sshd[5110]: Connection closed by 20.161.92.111 port 57470
Mar 3 12:48:35.181821 sshd-session[5107]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:35.195308 systemd[1]: sshd@17-172.31.17.163:22-20.161.92.111:57470.service: Deactivated successfully.
Mar 3 12:48:35.202742 systemd[1]: session-18.scope: Deactivated successfully.
Mar 3 12:48:35.204606 systemd-logind[1990]: Session 18 logged out. Waiting for processes to exit.
Mar 3 12:48:35.208018 systemd-logind[1990]: Removed session 18.
Mar 3 12:48:35.277836 systemd[1]: Started sshd@18-172.31.17.163:22-20.161.92.111:57482.service - OpenSSH per-connection server daemon (20.161.92.111:57482).
Mar 3 12:48:35.736769 sshd[5122]: Accepted publickey for core from 20.161.92.111 port 57482 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:35.738890 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:35.746616 systemd-logind[1990]: New session 19 of user core.
Mar 3 12:48:35.758433 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 3 12:48:36.179353 sshd[5125]: Connection closed by 20.161.92.111 port 57482
Mar 3 12:48:36.181419 sshd-session[5122]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:36.194333 systemd[1]: sshd@18-172.31.17.163:22-20.161.92.111:57482.service: Deactivated successfully.
Mar 3 12:48:36.202938 systemd[1]: session-19.scope: Deactivated successfully.
Mar 3 12:48:36.205425 systemd-logind[1990]: Session 19 logged out. Waiting for processes to exit.
Mar 3 12:48:36.208993 systemd-logind[1990]: Removed session 19.
Mar 3 12:48:36.275031 systemd[1]: Started sshd@19-172.31.17.163:22-20.161.92.111:57498.service - OpenSSH per-connection server daemon (20.161.92.111:57498).
Mar 3 12:48:36.748278 sshd[5134]: Accepted publickey for core from 20.161.92.111 port 57498 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:36.751072 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:36.759847 systemd-logind[1990]: New session 20 of user core.
Mar 3 12:48:36.771408 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 3 12:48:37.808270 sshd[5137]: Connection closed by 20.161.92.111 port 57498
Mar 3 12:48:37.809225 sshd-session[5134]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:37.819057 systemd[1]: sshd@19-172.31.17.163:22-20.161.92.111:57498.service: Deactivated successfully.
Mar 3 12:48:37.827033 systemd[1]: session-20.scope: Deactivated successfully.
Mar 3 12:48:37.833003 systemd-logind[1990]: Session 20 logged out. Waiting for processes to exit.
Mar 3 12:48:37.837245 systemd-logind[1990]: Removed session 20.
Mar 3 12:48:37.903507 systemd[1]: Started sshd@20-172.31.17.163:22-20.161.92.111:57506.service - OpenSSH per-connection server daemon (20.161.92.111:57506).
Mar 3 12:48:38.365538 sshd[5154]: Accepted publickey for core from 20.161.92.111 port 57506 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:38.367818 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:38.379392 systemd-logind[1990]: New session 21 of user core.
Mar 3 12:48:38.398511 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 3 12:48:38.965186 sshd[5157]: Connection closed by 20.161.92.111 port 57506
Mar 3 12:48:38.964356 sshd-session[5154]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:38.972003 systemd[1]: sshd@20-172.31.17.163:22-20.161.92.111:57506.service: Deactivated successfully.
Mar 3 12:48:38.975457 systemd[1]: session-21.scope: Deactivated successfully.
Mar 3 12:48:38.977087 systemd-logind[1990]: Session 21 logged out. Waiting for processes to exit.
Mar 3 12:48:38.980836 systemd-logind[1990]: Removed session 21.
Mar 3 12:48:39.068713 systemd[1]: Started sshd@21-172.31.17.163:22-20.161.92.111:57516.service - OpenSSH per-connection server daemon (20.161.92.111:57516).
Mar 3 12:48:39.559590 sshd[5167]: Accepted publickey for core from 20.161.92.111 port 57516 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:39.562100 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:39.570265 systemd-logind[1990]: New session 22 of user core.
Mar 3 12:48:39.580415 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 3 12:48:39.934516 sshd[5170]: Connection closed by 20.161.92.111 port 57516
Mar 3 12:48:39.935444 sshd-session[5167]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:39.945639 systemd[1]: sshd@21-172.31.17.163:22-20.161.92.111:57516.service: Deactivated successfully.
Mar 3 12:48:39.951801 systemd[1]: session-22.scope: Deactivated successfully.
Mar 3 12:48:39.956438 systemd-logind[1990]: Session 22 logged out. Waiting for processes to exit.
Mar 3 12:48:39.959311 systemd-logind[1990]: Removed session 22.
Mar 3 12:48:45.021334 systemd[1]: Started sshd@22-172.31.17.163:22-20.161.92.111:45646.service - OpenSSH per-connection server daemon (20.161.92.111:45646).
Mar 3 12:48:45.481855 sshd[5183]: Accepted publickey for core from 20.161.92.111 port 45646 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:45.484029 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:45.494251 systemd-logind[1990]: New session 23 of user core.
Mar 3 12:48:45.500395 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 3 12:48:45.837621 sshd[5186]: Connection closed by 20.161.92.111 port 45646
Mar 3 12:48:45.838703 sshd-session[5183]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:45.848443 systemd[1]: sshd@22-172.31.17.163:22-20.161.92.111:45646.service: Deactivated successfully.
Mar 3 12:48:45.854826 systemd[1]: session-23.scope: Deactivated successfully.
Mar 3 12:48:45.857256 systemd-logind[1990]: Session 23 logged out. Waiting for processes to exit.
Mar 3 12:48:45.860914 systemd-logind[1990]: Removed session 23.
Mar 3 12:48:50.934827 systemd[1]: Started sshd@23-172.31.17.163:22-20.161.92.111:41054.service - OpenSSH per-connection server daemon (20.161.92.111:41054).
Mar 3 12:48:51.390356 sshd[5200]: Accepted publickey for core from 20.161.92.111 port 41054 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:51.392462 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:51.400356 systemd-logind[1990]: New session 24 of user core.
Mar 3 12:48:51.410428 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 3 12:48:51.747465 sshd[5203]: Connection closed by 20.161.92.111 port 41054
Mar 3 12:48:51.748752 sshd-session[5200]: pam_unix(sshd:session): session closed for user core
Mar 3 12:48:51.754959 systemd[1]: sshd@23-172.31.17.163:22-20.161.92.111:41054.service: Deactivated successfully.
Mar 3 12:48:51.760421 systemd[1]: session-24.scope: Deactivated successfully.
Mar 3 12:48:51.765843 systemd-logind[1990]: Session 24 logged out. Waiting for processes to exit.
Mar 3 12:48:51.768196 systemd-logind[1990]: Removed session 24.
Mar 3 12:48:51.839921 systemd[1]: Started sshd@24-172.31.17.163:22-20.161.92.111:41060.service - OpenSSH per-connection server daemon (20.161.92.111:41060).
Mar 3 12:48:52.301240 sshd[5215]: Accepted publickey for core from 20.161.92.111 port 41060 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw
Mar 3 12:48:52.303336 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 12:48:52.312232 systemd-logind[1990]: New session 25 of user core.
Mar 3 12:48:52.322379 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 3 12:48:56.508499 containerd[2015]: time="2026-03-03T12:48:56.508412934Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 3 12:48:56.543994 containerd[2015]: time="2026-03-03T12:48:56.543930498Z" level=info msg="StopContainer for \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\" with timeout 2 (s)"
Mar 3 12:48:56.544525 containerd[2015]: time="2026-03-03T12:48:56.544478514Z" level=info msg="Stop container \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\" with signal terminated"
Mar 3 12:48:56.563667 systemd-networkd[1892]: lxc_health: Link DOWN
Mar 3 12:48:56.563683 systemd-networkd[1892]: lxc_health: Lost carrier
Mar 3 12:48:56.570739 containerd[2015]: time="2026-03-03T12:48:56.569392194Z" level=info msg="StopContainer for \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\" with timeout 30 (s)"
Mar 3 12:48:56.572538 containerd[2015]: time="2026-03-03T12:48:56.572445618Z" level=info msg="Stop container \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\" with signal terminated"
Mar 3 12:48:56.602431 systemd[1]: cri-containerd-1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782.scope: Deactivated successfully.
Mar 3 12:48:56.603013 systemd[1]: cri-containerd-1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782.scope: Consumed 14.424s CPU time, 127.3M memory peak, 120K read from disk, 12.9M written to disk.
Mar 3 12:48:56.608527 containerd[2015]: time="2026-03-03T12:48:56.608461698Z" level=info msg="received container exit event container_id:\"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\" id:\"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\" pid:4303 exited_at:{seconds:1772542136 nanos:607704078}"
Mar 3 12:48:56.618957 systemd[1]: cri-containerd-f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee.scope: Deactivated successfully.
Mar 3 12:48:56.625488 containerd[2015]: time="2026-03-03T12:48:56.625385719Z" level=info msg="received container exit event container_id:\"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\" id:\"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\" pid:4192 exited_at:{seconds:1772542136 nanos:624902755}"
Mar 3 12:48:56.663451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782-rootfs.mount: Deactivated successfully.
Mar 3 12:48:56.682651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee-rootfs.mount: Deactivated successfully.
Mar 3 12:48:56.692825 containerd[2015]: time="2026-03-03T12:48:56.692647915Z" level=info msg="StopContainer for \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\" returns successfully"
Mar 3 12:48:56.694130 containerd[2015]: time="2026-03-03T12:48:56.694064167Z" level=info msg="StopPodSandbox for \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\""
Mar 3 12:48:56.695073 containerd[2015]: time="2026-03-03T12:48:56.694321819Z" level=info msg="Container to stop \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 12:48:56.698048 containerd[2015]: time="2026-03-03T12:48:56.697926367Z" level=info msg="StopContainer for \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\" returns successfully"
Mar 3 12:48:56.698894 containerd[2015]: time="2026-03-03T12:48:56.698841811Z" level=info msg="StopPodSandbox for \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\""
Mar 3 12:48:56.699008 containerd[2015]: time="2026-03-03T12:48:56.698943463Z" level=info msg="Container to stop \"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 12:48:56.699008 containerd[2015]: time="2026-03-03T12:48:56.698970103Z" level=info msg="Container to stop \"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 12:48:56.699008 containerd[2015]: time="2026-03-03T12:48:56.698991199Z" level=info msg="Container to stop \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 12:48:56.699203 containerd[2015]: time="2026-03-03T12:48:56.699016939Z" level=info msg="Container to stop \"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 12:48:56.699203 containerd[2015]: time="2026-03-03T12:48:56.699040111Z" level=info msg="Container to stop \"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 12:48:56.713638 systemd[1]: cri-containerd-d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b.scope: Deactivated successfully.
Mar 3 12:48:56.717403 systemd[1]: cri-containerd-6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2.scope: Deactivated successfully.
Mar 3 12:48:56.721809 containerd[2015]: time="2026-03-03T12:48:56.721716823Z" level=info msg="received sandbox exit event container_id:\"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" id:\"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" exit_status:137 exited_at:{seconds:1772542136 nanos:717319123}" monitor_name=podsandbox
Mar 3 12:48:56.729063 containerd[2015]: time="2026-03-03T12:48:56.728903623Z" level=info msg="received sandbox exit event container_id:\"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\" id:\"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\" exit_status:137 exited_at:{seconds:1772542136 nanos:728435119}" monitor_name=podsandbox
Mar 3 12:48:56.773111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b-rootfs.mount: Deactivated successfully.
Mar 3 12:48:56.781597 containerd[2015]: time="2026-03-03T12:48:56.781523767Z" level=info msg="shim disconnected" id=d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b namespace=k8s.io
Mar 3 12:48:56.781597 containerd[2015]: time="2026-03-03T12:48:56.781578535Z" level=warning msg="cleaning up after shim disconnected" id=d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b namespace=k8s.io
Mar 3 12:48:56.781597 containerd[2015]: time="2026-03-03T12:48:56.781630891Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 3 12:48:56.783148 containerd[2015]: time="2026-03-03T12:48:56.783037867Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/containerd.sock.ttrpc->@: write: broken pipe"
Mar 3 12:48:56.790011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2-rootfs.mount: Deactivated successfully.
Mar 3 12:48:56.794986 containerd[2015]: time="2026-03-03T12:48:56.794610859Z" level=info msg="shim disconnected" id=6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2 namespace=k8s.io
Mar 3 12:48:56.794986 containerd[2015]: time="2026-03-03T12:48:56.794683855Z" level=warning msg="cleaning up after shim disconnected" id=6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2 namespace=k8s.io
Mar 3 12:48:56.794986 containerd[2015]: time="2026-03-03T12:48:56.794732827Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 3 12:48:56.820222 containerd[2015]: time="2026-03-03T12:48:56.820101199Z" level=info msg="TearDown network for sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" successfully"
Mar 3 12:48:56.820222 containerd[2015]: time="2026-03-03T12:48:56.820209043Z" level=info msg="StopPodSandbox for \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" returns successfully"
Mar 3 12:48:56.820693 containerd[2015]: time="2026-03-03T12:48:56.820618939Z" level=info msg="received sandbox container exit event sandbox_id:\"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" exit_status:137 exited_at:{seconds:1772542136 nanos:717319123}" monitor_name=criService
Mar 3 12:48:56.822646 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b-shm.mount: Deactivated successfully.
Mar 3 12:48:56.834652 containerd[2015]: time="2026-03-03T12:48:56.834305588Z" level=info msg="received sandbox container exit event sandbox_id:\"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\" exit_status:137 exited_at:{seconds:1772542136 nanos:728435119}" monitor_name=criService
Mar 3 12:48:56.835627 containerd[2015]: time="2026-03-03T12:48:56.835571420Z" level=info msg="TearDown network for sandbox \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\" successfully"
Mar 3 12:48:56.836588 containerd[2015]: time="2026-03-03T12:48:56.835879268Z" level=info msg="StopPodSandbox for \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\" returns successfully"
Mar 3 12:48:56.913811 kubelet[3654]: I0303 12:48:56.913741 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-cilium-cgroup\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.915964 kubelet[3654]: I0303 12:48:56.913849 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-bpf-maps\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.915964 kubelet[3654]: I0303 12:48:56.913926 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-cni-path\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.915964 kubelet[3654]: I0303 12:48:56.913999 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-cilium-run\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.915964 kubelet[3654]: I0303 12:48:56.914103 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18a27539-2792-4802-81f2-44e7006ce455-cilium-config-path\") pod \"18a27539-2792-4802-81f2-44e7006ce455\" (UID: \"18a27539-2792-4802-81f2-44e7006ce455\") "
Mar 3 12:48:56.915964 kubelet[3654]: I0303 12:48:56.914220 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-xtables-lock\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.915964 kubelet[3654]: I0303 12:48:56.914256 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-lib-modules\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.916345 kubelet[3654]: I0303 12:48:56.914328 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-host-proc-sys-kernel\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.916345 kubelet[3654]: I0303 12:48:56.914403 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-etc-cni-netd\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.916345 kubelet[3654]: I0303 12:48:56.914437 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-hostproc\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.916345 kubelet[3654]: I0303 12:48:56.914513 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nt7z\" (UniqueName: \"kubernetes.io/projected/4e2d770e-339c-407d-8504-9dba62c5b666-kube-api-access-4nt7z\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.916345 kubelet[3654]: I0303 12:48:56.914590 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-host-proc-sys-net\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.916345 kubelet[3654]: I0303 12:48:56.914671 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e2d770e-339c-407d-8504-9dba62c5b666-hubble-tls\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.916643 kubelet[3654]: I0303 12:48:56.914711 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxqws\" (UniqueName: \"kubernetes.io/projected/18a27539-2792-4802-81f2-44e7006ce455-kube-api-access-pxqws\") pod \"18a27539-2792-4802-81f2-44e7006ce455\" (UID: \"18a27539-2792-4802-81f2-44e7006ce455\") "
Mar 3 12:48:56.916643 kubelet[3654]: I0303 12:48:56.914828 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e2d770e-339c-407d-8504-9dba62c5b666-cilium-config-path\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.916643 kubelet[3654]: I0303 12:48:56.914871 3654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e2d770e-339c-407d-8504-9dba62c5b666-clustermesh-secrets\") pod \"4e2d770e-339c-407d-8504-9dba62c5b666\" (UID: \"4e2d770e-339c-407d-8504-9dba62c5b666\") "
Mar 3 12:48:56.916643 kubelet[3654]: I0303 12:48:56.915198 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 12:48:56.916643 kubelet[3654]: I0303 12:48:56.915273 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 12:48:56.918406 kubelet[3654]: I0303 12:48:56.915314 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 12:48:56.918406 kubelet[3654]: I0303 12:48:56.915350 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-cni-path" (OuterVolumeSpecName: "cni-path") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 12:48:56.918406 kubelet[3654]: I0303 12:48:56.915384 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 12:48:56.920999 kubelet[3654]: I0303 12:48:56.920921 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 12:48:56.921265 kubelet[3654]: I0303 12:48:56.921010 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-hostproc" (OuterVolumeSpecName: "hostproc") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 12:48:56.924808 kubelet[3654]: I0303 12:48:56.924315 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 12:48:56.924808 kubelet[3654]: I0303 12:48:56.924392 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 12:48:56.927719 kubelet[3654]: I0303 12:48:56.927658 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a27539-2792-4802-81f2-44e7006ce455-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "18a27539-2792-4802-81f2-44e7006ce455" (UID: "18a27539-2792-4802-81f2-44e7006ce455"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 3 12:48:56.932286 kubelet[3654]: I0303 12:48:56.930296 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 12:48:56.935195 kubelet[3654]: I0303 12:48:56.934569 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e2d770e-339c-407d-8504-9dba62c5b666-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 3 12:48:56.936976 kubelet[3654]: I0303 12:48:56.936866 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e2d770e-339c-407d-8504-9dba62c5b666-kube-api-access-4nt7z" (OuterVolumeSpecName: "kube-api-access-4nt7z") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "kube-api-access-4nt7z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 3 12:48:56.942943 kubelet[3654]: I0303 12:48:56.942881 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e2d770e-339c-407d-8504-9dba62c5b666-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 3 12:48:56.944805 kubelet[3654]: I0303 12:48:56.944670 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a27539-2792-4802-81f2-44e7006ce455-kube-api-access-pxqws" (OuterVolumeSpecName: "kube-api-access-pxqws") pod "18a27539-2792-4802-81f2-44e7006ce455" (UID: "18a27539-2792-4802-81f2-44e7006ce455"). InnerVolumeSpecName "kube-api-access-pxqws". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 3 12:48:56.947233 kubelet[3654]: I0303 12:48:56.947123 3654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e2d770e-339c-407d-8504-9dba62c5b666-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4e2d770e-339c-407d-8504-9dba62c5b666" (UID: "4e2d770e-339c-407d-8504-9dba62c5b666"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 3 12:48:57.015941 kubelet[3654]: I0303 12:48:57.015564 3654 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e2d770e-339c-407d-8504-9dba62c5b666-cilium-config-path\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.015941 kubelet[3654]: I0303 12:48:57.015613 3654 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e2d770e-339c-407d-8504-9dba62c5b666-clustermesh-secrets\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.015941 kubelet[3654]: I0303 12:48:57.015636 3654 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-cilium-cgroup\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.015941 kubelet[3654]: I0303 12:48:57.015656 3654 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-bpf-maps\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.015941 kubelet[3654]: I0303 12:48:57.015680 3654 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-cni-path\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.015941 kubelet[3654]: I0303 12:48:57.015701 3654 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-cilium-run\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.015941 kubelet[3654]: I0303 12:48:57.015723 3654 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18a27539-2792-4802-81f2-44e7006ce455-cilium-config-path\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.015941 kubelet[3654]: I0303 12:48:57.015743 3654 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-xtables-lock\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.016465 kubelet[3654]: I0303 12:48:57.015762 3654 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-lib-modules\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.016465 kubelet[3654]: I0303 12:48:57.015784 3654 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-host-proc-sys-kernel\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.016465 kubelet[3654]: I0303 12:48:57.015805 3654 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-etc-cni-netd\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.016465 kubelet[3654]: I0303 12:48:57.015824 3654 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-hostproc\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.016465 kubelet[3654]: I0303 12:48:57.015844 3654 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4nt7z\" (UniqueName: \"kubernetes.io/projected/4e2d770e-339c-407d-8504-9dba62c5b666-kube-api-access-4nt7z\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.016465 kubelet[3654]: I0303 12:48:57.015866 3654 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e2d770e-339c-407d-8504-9dba62c5b666-host-proc-sys-net\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.016465 kubelet[3654]: I0303 12:48:57.015887 3654 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e2d770e-339c-407d-8504-9dba62c5b666-hubble-tls\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.016465 kubelet[3654]: I0303 12:48:57.015906 3654 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pxqws\" (UniqueName: \"kubernetes.io/projected/18a27539-2792-4802-81f2-44e7006ce455-kube-api-access-pxqws\") on node \"ip-172-31-17-163\" DevicePath \"\""
Mar 3 12:48:57.589165 kubelet[3654]: I0303 12:48:57.588282 3654 scope.go:117] "RemoveContainer" containerID="f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee"
Mar 3 12:48:57.598090 containerd[2015]: time="2026-03-03T12:48:57.598026427Z" level=info msg="RemoveContainer for \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\""
Mar 3 12:48:57.603590 systemd[1]: Removed slice kubepods-besteffort-pod18a27539_2792_4802_81f2_44e7006ce455.slice - libcontainer container kubepods-besteffort-pod18a27539_2792_4802_81f2_44e7006ce455.slice.
Mar 3 12:48:57.614122 containerd[2015]: time="2026-03-03T12:48:57.614069731Z" level=info msg="RemoveContainer for \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\" returns successfully" Mar 3 12:48:57.615778 kubelet[3654]: I0303 12:48:57.615742 3654 scope.go:117] "RemoveContainer" containerID="f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee" Mar 3 12:48:57.616938 containerd[2015]: time="2026-03-03T12:48:57.616506763Z" level=error msg="ContainerStatus for \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\": not found" Mar 3 12:48:57.619194 kubelet[3654]: E0303 12:48:57.619103 3654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\": not found" containerID="f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee" Mar 3 12:48:57.619351 kubelet[3654]: I0303 12:48:57.619194 3654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee"} err="failed to get container status \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9b7136e204a4ba333d9dc781b41923caaa3eab4b10f91b6ce30337e758098ee\": not found" Mar 3 12:48:57.619351 kubelet[3654]: I0303 12:48:57.619262 3654 scope.go:117] "RemoveContainer" containerID="1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782" Mar 3 12:48:57.626465 containerd[2015]: time="2026-03-03T12:48:57.626391415Z" level=info msg="RemoveContainer for \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\"" Mar 3 12:48:57.637062 systemd[1]: 
Removed slice kubepods-burstable-pod4e2d770e_339c_407d_8504_9dba62c5b666.slice - libcontainer container kubepods-burstable-pod4e2d770e_339c_407d_8504_9dba62c5b666.slice. Mar 3 12:48:57.637942 systemd[1]: kubepods-burstable-pod4e2d770e_339c_407d_8504_9dba62c5b666.slice: Consumed 14.639s CPU time, 127.8M memory peak, 120K read from disk, 12.9M written to disk. Mar 3 12:48:57.647888 containerd[2015]: time="2026-03-03T12:48:57.647589824Z" level=info msg="RemoveContainer for \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\" returns successfully" Mar 3 12:48:57.648280 kubelet[3654]: I0303 12:48:57.648231 3654 scope.go:117] "RemoveContainer" containerID="6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5" Mar 3 12:48:57.656845 containerd[2015]: time="2026-03-03T12:48:57.656721956Z" level=info msg="RemoveContainer for \"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\"" Mar 3 12:48:57.667556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2-shm.mount: Deactivated successfully. Mar 3 12:48:57.667772 systemd[1]: var-lib-kubelet-pods-18a27539\x2d2792\x2d4802\x2d81f2\x2d44e7006ce455-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpxqws.mount: Deactivated successfully. Mar 3 12:48:57.667904 systemd[1]: var-lib-kubelet-pods-4e2d770e\x2d339c\x2d407d\x2d8504\x2d9dba62c5b666-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4nt7z.mount: Deactivated successfully. Mar 3 12:48:57.668042 systemd[1]: var-lib-kubelet-pods-4e2d770e\x2d339c\x2d407d\x2d8504\x2d9dba62c5b666-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 3 12:48:57.669682 systemd[1]: var-lib-kubelet-pods-4e2d770e\x2d339c\x2d407d\x2d8504\x2d9dba62c5b666-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 3 12:48:57.685173 containerd[2015]: time="2026-03-03T12:48:57.684985472Z" level=info msg="RemoveContainer for \"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\" returns successfully" Mar 3 12:48:57.685985 kubelet[3654]: I0303 12:48:57.685844 3654 scope.go:117] "RemoveContainer" containerID="e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201" Mar 3 12:48:57.697869 containerd[2015]: time="2026-03-03T12:48:57.697772228Z" level=info msg="RemoveContainer for \"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\"" Mar 3 12:48:57.706193 containerd[2015]: time="2026-03-03T12:48:57.705735380Z" level=info msg="RemoveContainer for \"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\" returns successfully" Mar 3 12:48:57.706407 kubelet[3654]: I0303 12:48:57.706364 3654 scope.go:117] "RemoveContainer" containerID="d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc" Mar 3 12:48:57.709831 containerd[2015]: time="2026-03-03T12:48:57.709761776Z" level=info msg="RemoveContainer for \"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\"" Mar 3 12:48:57.724397 containerd[2015]: time="2026-03-03T12:48:57.724073360Z" level=info msg="RemoveContainer for \"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\" returns successfully" Mar 3 12:48:57.724779 kubelet[3654]: I0303 12:48:57.724594 3654 scope.go:117] "RemoveContainer" containerID="ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852" Mar 3 12:48:57.728100 containerd[2015]: time="2026-03-03T12:48:57.728031848Z" level=info msg="RemoveContainer for \"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\"" Mar 3 12:48:57.735395 containerd[2015]: time="2026-03-03T12:48:57.735327056Z" level=info msg="RemoveContainer for \"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\" returns successfully" Mar 3 12:48:57.735706 kubelet[3654]: I0303 12:48:57.735626 3654 scope.go:117] "RemoveContainer" 
containerID="1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782" Mar 3 12:48:57.736050 containerd[2015]: time="2026-03-03T12:48:57.735959960Z" level=error msg="ContainerStatus for \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\": not found" Mar 3 12:48:57.736453 kubelet[3654]: E0303 12:48:57.736407 3654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\": not found" containerID="1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782" Mar 3 12:48:57.736555 kubelet[3654]: I0303 12:48:57.736457 3654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782"} err="failed to get container status \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f0c46c2115587c83e55d78192a9e6c15bfe938e021ca898886baf854687f782\": not found" Mar 3 12:48:57.736555 kubelet[3654]: I0303 12:48:57.736493 3654 scope.go:117] "RemoveContainer" containerID="6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5" Mar 3 12:48:57.736929 containerd[2015]: time="2026-03-03T12:48:57.736865432Z" level=error msg="ContainerStatus for \"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\": not found" Mar 3 12:48:57.737127 kubelet[3654]: E0303 12:48:57.737084 3654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\": not found" containerID="6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5" Mar 3 12:48:57.737248 kubelet[3654]: I0303 12:48:57.737197 3654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5"} err="failed to get container status \"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"6367e693762f47771c2a18a82c61e469236d0ec615aeb29c4cf20b76994745d5\": not found" Mar 3 12:48:57.737440 kubelet[3654]: I0303 12:48:57.737246 3654 scope.go:117] "RemoveContainer" containerID="e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201" Mar 3 12:48:57.737865 containerd[2015]: time="2026-03-03T12:48:57.737804900Z" level=error msg="ContainerStatus for \"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\": not found" Mar 3 12:48:57.738430 kubelet[3654]: E0303 12:48:57.738099 3654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\": not found" containerID="e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201" Mar 3 12:48:57.738430 kubelet[3654]: I0303 12:48:57.738263 3654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201"} err="failed to get container status \"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"e5d0aabf6c901f3da48a5120b3873efc4c9833fcac71e9531ebb519a5d12a201\": not found" Mar 3 12:48:57.738430 kubelet[3654]: I0303 12:48:57.738297 3654 scope.go:117] "RemoveContainer" containerID="d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc" Mar 3 12:48:57.739035 containerd[2015]: time="2026-03-03T12:48:57.738962996Z" level=error msg="ContainerStatus for \"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\": not found" Mar 3 12:48:57.739440 kubelet[3654]: E0303 12:48:57.739358 3654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\": not found" containerID="d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc" Mar 3 12:48:57.739440 kubelet[3654]: I0303 12:48:57.739415 3654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc"} err="failed to get container status \"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"d98457e8879131c34d6de4c485f942215d910197f0c4f5d616763e0cb44a57fc\": not found" Mar 3 12:48:57.739756 kubelet[3654]: I0303 12:48:57.739450 3654 scope.go:117] "RemoveContainer" containerID="ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852" Mar 3 12:48:57.739816 containerd[2015]: time="2026-03-03T12:48:57.739755824Z" level=error msg="ContainerStatus for \"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\": not found" Mar 3 12:48:57.740145 kubelet[3654]: E0303 12:48:57.740104 3654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\": not found" containerID="ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852" Mar 3 12:48:57.740242 kubelet[3654]: I0303 12:48:57.740182 3654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852"} err="failed to get container status \"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\": rpc error: code = NotFound desc = an error occurred when try to find container \"ebd682398042a33476c19e1f42af8d1c177762e103d8052f960b3820cd5bd852\": not found" Mar 3 12:48:57.978186 kubelet[3654]: I0303 12:48:57.977272 3654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a27539-2792-4802-81f2-44e7006ce455" path="/var/lib/kubelet/pods/18a27539-2792-4802-81f2-44e7006ce455/volumes" Mar 3 12:48:57.978930 kubelet[3654]: I0303 12:48:57.978896 3654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e2d770e-339c-407d-8504-9dba62c5b666" path="/var/lib/kubelet/pods/4e2d770e-339c-407d-8504-9dba62c5b666/volumes" Mar 3 12:48:58.373316 sshd[5218]: Connection closed by 20.161.92.111 port 41060 Mar 3 12:48:58.374515 sshd-session[5215]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:58.383947 systemd[1]: sshd@24-172.31.17.163:22-20.161.92.111:41060.service: Deactivated successfully. Mar 3 12:48:58.387518 systemd[1]: session-25.scope: Deactivated successfully. Mar 3 12:48:58.387904 systemd[1]: session-25.scope: Consumed 3.221s CPU time, 23.7M memory peak. Mar 3 12:48:58.389676 systemd-logind[1990]: Session 25 logged out. Waiting for processes to exit. 
Mar 3 12:48:58.393076 systemd-logind[1990]: Removed session 25. Mar 3 12:48:58.467775 systemd[1]: Started sshd@25-172.31.17.163:22-20.161.92.111:41076.service - OpenSSH per-connection server daemon (20.161.92.111:41076). Mar 3 12:48:58.622777 ntpd[2198]: Deleting 10 lxc_health, [fe80::f051:94ff:fe9f:fd84%8]:123, stats: received=0, sent=0, dropped=0, active_time=76 secs Mar 3 12:48:58.623421 ntpd[2198]: 3 Mar 12:48:58 ntpd[2198]: Deleting 10 lxc_health, [fe80::f051:94ff:fe9f:fd84%8]:123, stats: received=0, sent=0, dropped=0, active_time=76 secs Mar 3 12:48:58.930267 sshd[5371]: Accepted publickey for core from 20.161.92.111 port 41076 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:58.932830 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:58.940411 systemd-logind[1990]: New session 26 of user core. Mar 3 12:48:58.946391 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 3 12:48:59.303832 kubelet[3654]: E0303 12:48:59.303503 3654 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 3 12:49:00.352680 systemd[1]: Created slice kubepods-burstable-podec12e22c_966a_4422_94a7_0b5bc7180dd1.slice - libcontainer container kubepods-burstable-podec12e22c_966a_4422_94a7_0b5bc7180dd1.slice. Mar 3 12:49:00.366979 sshd[5374]: Connection closed by 20.161.92.111 port 41076 Mar 3 12:49:00.366834 sshd-session[5371]: pam_unix(sshd:session): session closed for user core Mar 3 12:49:00.380474 systemd[1]: sshd@25-172.31.17.163:22-20.161.92.111:41076.service: Deactivated successfully. Mar 3 12:49:00.388979 systemd[1]: session-26.scope: Deactivated successfully. Mar 3 12:49:00.389545 systemd[1]: session-26.scope: Consumed 1.077s CPU time, 21.4M memory peak. Mar 3 12:49:00.392985 systemd-logind[1990]: Session 26 logged out. Waiting for processes to exit. 
Mar 3 12:49:00.397328 systemd-logind[1990]: Removed session 26. Mar 3 12:49:00.435164 kubelet[3654]: I0303 12:49:00.435091 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec12e22c-966a-4422-94a7-0b5bc7180dd1-bpf-maps\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.436590 kubelet[3654]: I0303 12:49:00.435825 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec12e22c-966a-4422-94a7-0b5bc7180dd1-clustermesh-secrets\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.436590 kubelet[3654]: I0303 12:49:00.435895 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec12e22c-966a-4422-94a7-0b5bc7180dd1-cilium-config-path\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.436590 kubelet[3654]: I0303 12:49:00.435948 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4lpn\" (UniqueName: \"kubernetes.io/projected/ec12e22c-966a-4422-94a7-0b5bc7180dd1-kube-api-access-b4lpn\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.436590 kubelet[3654]: I0303 12:49:00.435989 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec12e22c-966a-4422-94a7-0b5bc7180dd1-lib-modules\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.436590 kubelet[3654]: I0303 
12:49:00.436025 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ec12e22c-966a-4422-94a7-0b5bc7180dd1-cilium-ipsec-secrets\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.436928 kubelet[3654]: I0303 12:49:00.436070 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec12e22c-966a-4422-94a7-0b5bc7180dd1-cilium-run\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.436928 kubelet[3654]: I0303 12:49:00.436103 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec12e22c-966a-4422-94a7-0b5bc7180dd1-hostproc\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.436928 kubelet[3654]: I0303 12:49:00.436172 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec12e22c-966a-4422-94a7-0b5bc7180dd1-cilium-cgroup\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.436928 kubelet[3654]: I0303 12:49:00.436216 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec12e22c-966a-4422-94a7-0b5bc7180dd1-xtables-lock\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.436928 kubelet[3654]: I0303 12:49:00.436277 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/ec12e22c-966a-4422-94a7-0b5bc7180dd1-host-proc-sys-net\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.436928 kubelet[3654]: I0303 12:49:00.436365 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec12e22c-966a-4422-94a7-0b5bc7180dd1-cni-path\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.438367 kubelet[3654]: I0303 12:49:00.436413 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec12e22c-966a-4422-94a7-0b5bc7180dd1-host-proc-sys-kernel\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.438367 kubelet[3654]: I0303 12:49:00.436447 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec12e22c-966a-4422-94a7-0b5bc7180dd1-hubble-tls\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.438367 kubelet[3654]: I0303 12:49:00.436491 3654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec12e22c-966a-4422-94a7-0b5bc7180dd1-etc-cni-netd\") pod \"cilium-bz24f\" (UID: \"ec12e22c-966a-4422-94a7-0b5bc7180dd1\") " pod="kube-system/cilium-bz24f" Mar 3 12:49:00.467795 systemd[1]: Started sshd@26-172.31.17.163:22-20.161.92.111:53298.service - OpenSSH per-connection server daemon (20.161.92.111:53298). 
Mar 3 12:49:00.661189 containerd[2015]: time="2026-03-03T12:49:00.661108175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bz24f,Uid:ec12e22c-966a-4422-94a7-0b5bc7180dd1,Namespace:kube-system,Attempt:0,}" Mar 3 12:49:00.697461 containerd[2015]: time="2026-03-03T12:49:00.697401011Z" level=info msg="connecting to shim 6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661" address="unix:///run/containerd/s/456f8d73e5d1fa36c5f911253e5ef9e1b3e30c94485167589709b0f9036bf93c" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:49:00.734444 systemd[1]: Started cri-containerd-6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661.scope - libcontainer container 6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661. Mar 3 12:49:00.789681 containerd[2015]: time="2026-03-03T12:49:00.789579443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bz24f,Uid:ec12e22c-966a-4422-94a7-0b5bc7180dd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661\"" Mar 3 12:49:00.800951 containerd[2015]: time="2026-03-03T12:49:00.800891363Z" level=info msg="CreateContainer within sandbox \"6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 3 12:49:00.815202 containerd[2015]: time="2026-03-03T12:49:00.814427171Z" level=info msg="Container cb8154e07594b5555882cee6083076af51ddd6472618ec41e09830fc9f9b692d: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:49:00.827566 containerd[2015]: time="2026-03-03T12:49:00.827379119Z" level=info msg="CreateContainer within sandbox \"6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb8154e07594b5555882cee6083076af51ddd6472618ec41e09830fc9f9b692d\"" Mar 3 12:49:00.828249 containerd[2015]: time="2026-03-03T12:49:00.828169991Z" level=info msg="StartContainer 
for \"cb8154e07594b5555882cee6083076af51ddd6472618ec41e09830fc9f9b692d\"" Mar 3 12:49:00.832275 containerd[2015]: time="2026-03-03T12:49:00.832130687Z" level=info msg="connecting to shim cb8154e07594b5555882cee6083076af51ddd6472618ec41e09830fc9f9b692d" address="unix:///run/containerd/s/456f8d73e5d1fa36c5f911253e5ef9e1b3e30c94485167589709b0f9036bf93c" protocol=ttrpc version=3 Mar 3 12:49:00.870523 systemd[1]: Started cri-containerd-cb8154e07594b5555882cee6083076af51ddd6472618ec41e09830fc9f9b692d.scope - libcontainer container cb8154e07594b5555882cee6083076af51ddd6472618ec41e09830fc9f9b692d. Mar 3 12:49:00.933599 containerd[2015]: time="2026-03-03T12:49:00.933421356Z" level=info msg="StartContainer for \"cb8154e07594b5555882cee6083076af51ddd6472618ec41e09830fc9f9b692d\" returns successfully" Mar 3 12:49:00.940272 sshd[5385]: Accepted publickey for core from 20.161.92.111 port 53298 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:49:00.943690 sshd-session[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:49:00.957632 systemd-logind[1990]: New session 27 of user core. Mar 3 12:49:00.965540 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 3 12:49:00.966587 systemd[1]: cri-containerd-cb8154e07594b5555882cee6083076af51ddd6472618ec41e09830fc9f9b692d.scope: Deactivated successfully. 
Mar 3 12:49:00.974744 containerd[2015]: time="2026-03-03T12:49:00.974660244Z" level=info msg="received container exit event container_id:\"cb8154e07594b5555882cee6083076af51ddd6472618ec41e09830fc9f9b692d\" id:\"cb8154e07594b5555882cee6083076af51ddd6472618ec41e09830fc9f9b692d\" pid:5453 exited_at:{seconds:1772542140 nanos:973615572}" Mar 3 12:49:01.178867 sshd[5472]: Connection closed by 20.161.92.111 port 53298 Mar 3 12:49:01.180340 sshd-session[5385]: pam_unix(sshd:session): session closed for user core Mar 3 12:49:01.188785 systemd[1]: sshd@26-172.31.17.163:22-20.161.92.111:53298.service: Deactivated successfully. Mar 3 12:49:01.189190 systemd-logind[1990]: Session 27 logged out. Waiting for processes to exit. Mar 3 12:49:01.197852 systemd[1]: session-27.scope: Deactivated successfully. Mar 3 12:49:01.204926 systemd-logind[1990]: Removed session 27. Mar 3 12:49:01.270479 systemd[1]: Started sshd@27-172.31.17.163:22-20.161.92.111:53312.service - OpenSSH per-connection server daemon (20.161.92.111:53312). 
Mar 3 12:49:01.640783 containerd[2015]: time="2026-03-03T12:49:01.640647035Z" level=info msg="CreateContainer within sandbox \"6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 3 12:49:01.658645 containerd[2015]: time="2026-03-03T12:49:01.658383252Z" level=info msg="Container 198ad2e25fa3fab275d0e31bb6ac1f13230e87165dbdd441a8ebfa3ebf560dd1: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:49:01.687812 containerd[2015]: time="2026-03-03T12:49:01.687648036Z" level=info msg="CreateContainer within sandbox \"6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"198ad2e25fa3fab275d0e31bb6ac1f13230e87165dbdd441a8ebfa3ebf560dd1\"" Mar 3 12:49:01.689985 containerd[2015]: time="2026-03-03T12:49:01.689885016Z" level=info msg="StartContainer for \"198ad2e25fa3fab275d0e31bb6ac1f13230e87165dbdd441a8ebfa3ebf560dd1\"" Mar 3 12:49:01.692643 containerd[2015]: time="2026-03-03T12:49:01.691685484Z" level=info msg="connecting to shim 198ad2e25fa3fab275d0e31bb6ac1f13230e87165dbdd441a8ebfa3ebf560dd1" address="unix:///run/containerd/s/456f8d73e5d1fa36c5f911253e5ef9e1b3e30c94485167589709b0f9036bf93c" protocol=ttrpc version=3 Mar 3 12:49:01.736629 sshd[5491]: Accepted publickey for core from 20.161.92.111 port 53312 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:49:01.739681 systemd[1]: Started cri-containerd-198ad2e25fa3fab275d0e31bb6ac1f13230e87165dbdd441a8ebfa3ebf560dd1.scope - libcontainer container 198ad2e25fa3fab275d0e31bb6ac1f13230e87165dbdd441a8ebfa3ebf560dd1. Mar 3 12:49:01.745668 sshd-session[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:49:01.764509 systemd-logind[1990]: New session 28 of user core. Mar 3 12:49:01.773657 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 3 12:49:01.831996 containerd[2015]: time="2026-03-03T12:49:01.831918096Z" level=info msg="StartContainer for \"198ad2e25fa3fab275d0e31bb6ac1f13230e87165dbdd441a8ebfa3ebf560dd1\" returns successfully" Mar 3 12:49:01.847450 systemd[1]: cri-containerd-198ad2e25fa3fab275d0e31bb6ac1f13230e87165dbdd441a8ebfa3ebf560dd1.scope: Deactivated successfully. Mar 3 12:49:01.852638 containerd[2015]: time="2026-03-03T12:49:01.852555216Z" level=info msg="received container exit event container_id:\"198ad2e25fa3fab275d0e31bb6ac1f13230e87165dbdd441a8ebfa3ebf560dd1\" id:\"198ad2e25fa3fab275d0e31bb6ac1f13230e87165dbdd441a8ebfa3ebf560dd1\" pid:5508 exited_at:{seconds:1772542141 nanos:852045408}" Mar 3 12:49:01.899877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-198ad2e25fa3fab275d0e31bb6ac1f13230e87165dbdd441a8ebfa3ebf560dd1-rootfs.mount: Deactivated successfully. Mar 3 12:49:02.652118 containerd[2015]: time="2026-03-03T12:49:02.651992916Z" level=info msg="CreateContainer within sandbox \"6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 3 12:49:02.693923 containerd[2015]: time="2026-03-03T12:49:02.692927497Z" level=info msg="Container 934a2786800dfa04639efd85a0776a0cc6509324ca360a06290b4ac2216cd2ae: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:49:02.705546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707522991.mount: Deactivated successfully. 
Mar 3 12:49:02.715567 containerd[2015]: time="2026-03-03T12:49:02.715394569Z" level=info msg="CreateContainer within sandbox \"6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"934a2786800dfa04639efd85a0776a0cc6509324ca360a06290b4ac2216cd2ae\"" Mar 3 12:49:02.717634 containerd[2015]: time="2026-03-03T12:49:02.717487033Z" level=info msg="StartContainer for \"934a2786800dfa04639efd85a0776a0cc6509324ca360a06290b4ac2216cd2ae\"" Mar 3 12:49:02.722162 containerd[2015]: time="2026-03-03T12:49:02.721955005Z" level=info msg="connecting to shim 934a2786800dfa04639efd85a0776a0cc6509324ca360a06290b4ac2216cd2ae" address="unix:///run/containerd/s/456f8d73e5d1fa36c5f911253e5ef9e1b3e30c94485167589709b0f9036bf93c" protocol=ttrpc version=3 Mar 3 12:49:02.762458 systemd[1]: Started cri-containerd-934a2786800dfa04639efd85a0776a0cc6509324ca360a06290b4ac2216cd2ae.scope - libcontainer container 934a2786800dfa04639efd85a0776a0cc6509324ca360a06290b4ac2216cd2ae. Mar 3 12:49:02.879479 containerd[2015]: time="2026-03-03T12:49:02.879396074Z" level=info msg="StartContainer for \"934a2786800dfa04639efd85a0776a0cc6509324ca360a06290b4ac2216cd2ae\" returns successfully" Mar 3 12:49:02.881097 systemd[1]: cri-containerd-934a2786800dfa04639efd85a0776a0cc6509324ca360a06290b4ac2216cd2ae.scope: Deactivated successfully. 
Mar 3 12:49:02.886976 containerd[2015]: time="2026-03-03T12:49:02.886896458Z" level=info msg="received container exit event container_id:\"934a2786800dfa04639efd85a0776a0cc6509324ca360a06290b4ac2216cd2ae\" id:\"934a2786800dfa04639efd85a0776a0cc6509324ca360a06290b4ac2216cd2ae\" pid:5561 exited_at:{seconds:1772542142 nanos:886587314}"
Mar 3 12:49:03.661801 containerd[2015]: time="2026-03-03T12:49:03.660673633Z" level=info msg="CreateContainer within sandbox \"6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 3 12:49:03.675592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-934a2786800dfa04639efd85a0776a0cc6509324ca360a06290b4ac2216cd2ae-rootfs.mount: Deactivated successfully.
Mar 3 12:49:03.686200 containerd[2015]: time="2026-03-03T12:49:03.685702430Z" level=info msg="Container 79ec6f0431809a587fdf44a16edb06bf645b413e9084b7df2a875da24daf7823: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:49:03.708354 containerd[2015]: time="2026-03-03T12:49:03.708292166Z" level=info msg="CreateContainer within sandbox \"6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"79ec6f0431809a587fdf44a16edb06bf645b413e9084b7df2a875da24daf7823\""
Mar 3 12:49:03.711178 containerd[2015]: time="2026-03-03T12:49:03.710037278Z" level=info msg="StartContainer for \"79ec6f0431809a587fdf44a16edb06bf645b413e9084b7df2a875da24daf7823\""
Mar 3 12:49:03.711858 containerd[2015]: time="2026-03-03T12:49:03.711776858Z" level=info msg="connecting to shim 79ec6f0431809a587fdf44a16edb06bf645b413e9084b7df2a875da24daf7823" address="unix:///run/containerd/s/456f8d73e5d1fa36c5f911253e5ef9e1b3e30c94485167589709b0f9036bf93c" protocol=ttrpc version=3
Mar 3 12:49:03.781519 systemd[1]: Started cri-containerd-79ec6f0431809a587fdf44a16edb06bf645b413e9084b7df2a875da24daf7823.scope - libcontainer container 79ec6f0431809a587fdf44a16edb06bf645b413e9084b7df2a875da24daf7823.
Mar 3 12:49:03.844980 systemd[1]: cri-containerd-79ec6f0431809a587fdf44a16edb06bf645b413e9084b7df2a875da24daf7823.scope: Deactivated successfully.
Mar 3 12:49:03.848200 containerd[2015]: time="2026-03-03T12:49:03.848026874Z" level=info msg="received container exit event container_id:\"79ec6f0431809a587fdf44a16edb06bf645b413e9084b7df2a875da24daf7823\" id:\"79ec6f0431809a587fdf44a16edb06bf645b413e9084b7df2a875da24daf7823\" pid:5604 exited_at:{seconds:1772542143 nanos:844806530}"
Mar 3 12:49:03.863772 containerd[2015]: time="2026-03-03T12:49:03.863713250Z" level=info msg="StartContainer for \"79ec6f0431809a587fdf44a16edb06bf645b413e9084b7df2a875da24daf7823\" returns successfully"
Mar 3 12:49:03.893597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79ec6f0431809a587fdf44a16edb06bf645b413e9084b7df2a875da24daf7823-rootfs.mount: Deactivated successfully.
Mar 3 12:49:04.305653 kubelet[3654]: E0303 12:49:04.305595 3654 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 12:49:04.670969 containerd[2015]: time="2026-03-03T12:49:04.670892774Z" level=info msg="CreateContainer within sandbox \"6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 3 12:49:04.710478 containerd[2015]: time="2026-03-03T12:49:04.710402811Z" level=info msg="Container 4a90bcdb1253f2266b22f6ddfdcdddc052a80552104d0fa3fdf2844ce2159559: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:49:04.712516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767407007.mount: Deactivated successfully.
Mar 3 12:49:04.735811 containerd[2015]: time="2026-03-03T12:49:04.735539487Z" level=info msg="CreateContainer within sandbox \"6c0314deef5661b573a740de0e0b2d3c5705979c883d7e095d2885cdff451661\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a90bcdb1253f2266b22f6ddfdcdddc052a80552104d0fa3fdf2844ce2159559\""
Mar 3 12:49:04.738451 containerd[2015]: time="2026-03-03T12:49:04.738384387Z" level=info msg="StartContainer for \"4a90bcdb1253f2266b22f6ddfdcdddc052a80552104d0fa3fdf2844ce2159559\""
Mar 3 12:49:04.742578 containerd[2015]: time="2026-03-03T12:49:04.742506711Z" level=info msg="connecting to shim 4a90bcdb1253f2266b22f6ddfdcdddc052a80552104d0fa3fdf2844ce2159559" address="unix:///run/containerd/s/456f8d73e5d1fa36c5f911253e5ef9e1b3e30c94485167589709b0f9036bf93c" protocol=ttrpc version=3
Mar 3 12:49:04.818166 systemd[1]: Started cri-containerd-4a90bcdb1253f2266b22f6ddfdcdddc052a80552104d0fa3fdf2844ce2159559.scope - libcontainer container 4a90bcdb1253f2266b22f6ddfdcdddc052a80552104d0fa3fdf2844ce2159559.
Mar 3 12:49:04.943626 containerd[2015]: time="2026-03-03T12:49:04.943088668Z" level=info msg="StartContainer for \"4a90bcdb1253f2266b22f6ddfdcdddc052a80552104d0fa3fdf2844ce2159559\" returns successfully"
Mar 3 12:49:05.721703 kubelet[3654]: I0303 12:49:05.721535 3654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bz24f" podStartSLOduration=5.721487452 podStartE2EDuration="5.721487452s" podCreationTimestamp="2026-03-03 12:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:49:05.719359144 +0000 UTC m=+112.088708994" watchObservedRunningTime="2026-03-03 12:49:05.721487452 +0000 UTC m=+112.090837290"
Mar 3 12:49:05.796174 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 3 12:49:06.411695 kubelet[3654]: I0303 12:49:06.411569 3654 setters.go:618] "Node became not ready" node="ip-172-31-17-163" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-03T12:49:06Z","lastTransitionTime":"2026-03-03T12:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 3 12:49:10.039627 (udev-worker)[6177]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:49:10.044038 systemd-networkd[1892]: lxc_health: Link UP
Mar 3 12:49:10.049557 systemd-networkd[1892]: lxc_health: Gained carrier
Mar 3 12:49:11.524391 systemd-networkd[1892]: lxc_health: Gained IPv6LL
Mar 3 12:49:13.622831 ntpd[2198]: Listen normally on 13 lxc_health [fe80::30c6:87ff:fef8:67aa%14]:123
Mar 3 12:49:13.623427 ntpd[2198]: 3 Mar 12:49:13 ntpd[2198]: Listen normally on 13 lxc_health [fe80::30c6:87ff:fef8:67aa%14]:123
Mar 3 12:49:13.929911 containerd[2015]: time="2026-03-03T12:49:13.929497596Z" level=info msg="StopPodSandbox for \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\""
Mar 3 12:49:13.931172 containerd[2015]: time="2026-03-03T12:49:13.930637068Z" level=info msg="TearDown network for sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" successfully"
Mar 3 12:49:13.931172 containerd[2015]: time="2026-03-03T12:49:13.930708936Z" level=info msg="StopPodSandbox for \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" returns successfully"
Mar 3 12:49:13.932426 containerd[2015]: time="2026-03-03T12:49:13.932311020Z" level=info msg="RemovePodSandbox for \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\""
Mar 3 12:49:13.932655 containerd[2015]: time="2026-03-03T12:49:13.932389824Z" level=info msg="Forcibly stopping sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\""
Mar 3 12:49:13.932929 containerd[2015]: time="2026-03-03T12:49:13.932870412Z" level=info msg="TearDown network for sandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" successfully"
Mar 3 12:49:13.935429 containerd[2015]: time="2026-03-03T12:49:13.935369148Z" level=info msg="Ensure that sandbox d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b in task-service has been cleanup successfully"
Mar 3 12:49:13.953389 containerd[2015]: time="2026-03-03T12:49:13.953198125Z" level=info msg="RemovePodSandbox \"d6604275f6cbf6da08b8f8e33574ee82bc4a4e5ddac0605a72e60b2182b8aa5b\" returns successfully"
Mar 3 12:49:13.954942 containerd[2015]: time="2026-03-03T12:49:13.954613717Z" level=info msg="StopPodSandbox for \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\""
Mar 3 12:49:13.954942 containerd[2015]: time="2026-03-03T12:49:13.954806725Z" level=info msg="TearDown network for sandbox \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\" successfully"
Mar 3 12:49:13.954942 containerd[2015]: time="2026-03-03T12:49:13.954832213Z" level=info msg="StopPodSandbox for \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\" returns successfully"
Mar 3 12:49:13.955774 containerd[2015]: time="2026-03-03T12:49:13.955711777Z" level=info msg="RemovePodSandbox for \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\""
Mar 3 12:49:13.956018 containerd[2015]: time="2026-03-03T12:49:13.955989109Z" level=info msg="Forcibly stopping sandbox \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\""
Mar 3 12:49:13.957044 containerd[2015]: time="2026-03-03T12:49:13.956268421Z" level=info msg="TearDown network for sandbox \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\" successfully"
Mar 3 12:49:13.959356 containerd[2015]: time="2026-03-03T12:49:13.959309617Z" level=info msg="Ensure that sandbox 6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2 in task-service has been cleanup successfully"
Mar 3 12:49:13.966839 containerd[2015]: time="2026-03-03T12:49:13.966749641Z" level=info msg="RemovePodSandbox \"6a6dfd726f124de431adea9b18a1b3cfb1f3505eafc6b0417c52e8bc6491feb2\" returns successfully"
Mar 3 12:49:15.682176 sshd[5514]: Connection closed by 20.161.92.111 port 53312
Mar 3 12:49:15.683315 sshd-session[5491]: pam_unix(sshd:session): session closed for user core
Mar 3 12:49:15.693507 systemd[1]: sshd@27-172.31.17.163:22-20.161.92.111:53312.service: Deactivated successfully.
Mar 3 12:49:15.702953 systemd[1]: session-28.scope: Deactivated successfully.
Mar 3 12:49:15.705744 systemd-logind[1990]: Session 28 logged out. Waiting for processes to exit.
Mar 3 12:49:15.711595 systemd-logind[1990]: Removed session 28.
Mar 3 12:49:30.651921 systemd[1]: cri-containerd-b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825.scope: Deactivated successfully.
Mar 3 12:49:30.653269 systemd[1]: cri-containerd-b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825.scope: Consumed 5.745s CPU time, 53M memory peak.
Mar 3 12:49:30.658694 containerd[2015]: time="2026-03-03T12:49:30.658627444Z" level=info msg="received container exit event container_id:\"b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825\" id:\"b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825\" pid:3490 exit_status:1 exited_at:{seconds:1772542170 nanos:658098388}"
Mar 3 12:49:30.700475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825-rootfs.mount: Deactivated successfully.
Mar 3 12:49:30.761860 kubelet[3654]: I0303 12:49:30.761780 3654 scope.go:117] "RemoveContainer" containerID="b92c2707e7cf163448954b90b858f1eacee0497c754319558b50e59fef98f825"
Mar 3 12:49:30.766165 containerd[2015]: time="2026-03-03T12:49:30.766027708Z" level=info msg="CreateContainer within sandbox \"211ead717d54288020168e5eb1e32f7d26d6ee1fe32f5a511d75b4974f4985db\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 3 12:49:30.784332 containerd[2015]: time="2026-03-03T12:49:30.782482708Z" level=info msg="Container 92d7936678fa2ff221ba58360139fd5a3aa370fa3534324acd7f8d2bd9397800: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:49:30.791837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822097391.mount: Deactivated successfully.
Mar 3 12:49:30.803862 containerd[2015]: time="2026-03-03T12:49:30.803778028Z" level=info msg="CreateContainer within sandbox \"211ead717d54288020168e5eb1e32f7d26d6ee1fe32f5a511d75b4974f4985db\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"92d7936678fa2ff221ba58360139fd5a3aa370fa3534324acd7f8d2bd9397800\""
Mar 3 12:49:30.804799 containerd[2015]: time="2026-03-03T12:49:30.804698752Z" level=info msg="StartContainer for \"92d7936678fa2ff221ba58360139fd5a3aa370fa3534324acd7f8d2bd9397800\""
Mar 3 12:49:30.806796 containerd[2015]: time="2026-03-03T12:49:30.806730328Z" level=info msg="connecting to shim 92d7936678fa2ff221ba58360139fd5a3aa370fa3534324acd7f8d2bd9397800" address="unix:///run/containerd/s/e6b00b8ae182afd29970967ac85a8713fc34a87e5d33747759d66625c2e56578" protocol=ttrpc version=3
Mar 3 12:49:30.850819 systemd[1]: Started cri-containerd-92d7936678fa2ff221ba58360139fd5a3aa370fa3534324acd7f8d2bd9397800.scope - libcontainer container 92d7936678fa2ff221ba58360139fd5a3aa370fa3534324acd7f8d2bd9397800.
Mar 3 12:49:30.945718 containerd[2015]: time="2026-03-03T12:49:30.945389981Z" level=info msg="StartContainer for \"92d7936678fa2ff221ba58360139fd5a3aa370fa3534324acd7f8d2bd9397800\" returns successfully"
Mar 3 12:49:35.099796 systemd[1]: cri-containerd-446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7.scope: Deactivated successfully.
Mar 3 12:49:35.101060 systemd[1]: cri-containerd-446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7.scope: Consumed 5.266s CPU time, 20.9M memory peak.
Mar 3 12:49:35.105382 containerd[2015]: time="2026-03-03T12:49:35.105323622Z" level=info msg="received container exit event container_id:\"446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7\" id:\"446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7\" pid:3499 exit_status:1 exited_at:{seconds:1772542175 nanos:104632566}"
Mar 3 12:49:35.147753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7-rootfs.mount: Deactivated successfully.
Mar 3 12:49:35.786243 kubelet[3654]: I0303 12:49:35.786196 3654 scope.go:117] "RemoveContainer" containerID="446baf5bc3fcdd94208aa860dcbaf6783d265b121adb67eed807d1fe3c2105c7"
Mar 3 12:49:35.791763 containerd[2015]: time="2026-03-03T12:49:35.791666061Z" level=info msg="CreateContainer within sandbox \"06bd6696c5b50368a7ad567154a4db07ce8271065e4e9f995dc1989d30c0b3e8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 3 12:49:35.814377 containerd[2015]: time="2026-03-03T12:49:35.814266861Z" level=info msg="Container baff81b99c8dc871c690e79c2521a81370749c79bb0bedff88880067c728a062: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:49:35.821190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3373420813.mount: Deactivated successfully.
Mar 3 12:49:35.837461 containerd[2015]: time="2026-03-03T12:49:35.837386049Z" level=info msg="CreateContainer within sandbox \"06bd6696c5b50368a7ad567154a4db07ce8271065e4e9f995dc1989d30c0b3e8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"baff81b99c8dc871c690e79c2521a81370749c79bb0bedff88880067c728a062\""
Mar 3 12:49:35.838312 containerd[2015]: time="2026-03-03T12:49:35.838255773Z" level=info msg="StartContainer for \"baff81b99c8dc871c690e79c2521a81370749c79bb0bedff88880067c728a062\""
Mar 3 12:49:35.840499 containerd[2015]: time="2026-03-03T12:49:35.840396321Z" level=info msg="connecting to shim baff81b99c8dc871c690e79c2521a81370749c79bb0bedff88880067c728a062" address="unix:///run/containerd/s/0ff30e89b316dd8345da99ee765ef43b31b3dbdbfd46efad9ac6dfb0e6f1ffcd" protocol=ttrpc version=3
Mar 3 12:49:35.885427 systemd[1]: Started cri-containerd-baff81b99c8dc871c690e79c2521a81370749c79bb0bedff88880067c728a062.scope - libcontainer container baff81b99c8dc871c690e79c2521a81370749c79bb0bedff88880067c728a062.
Mar 3 12:49:35.970426 containerd[2015]: time="2026-03-03T12:49:35.970358350Z" level=info msg="StartContainer for \"baff81b99c8dc871c690e79c2521a81370749c79bb0bedff88880067c728a062\" returns successfully"
Mar 3 12:49:37.057013 kubelet[3654]: E0303 12:49:37.056942 3654 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-163?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 3 12:49:47.058464 kubelet[3654]: E0303 12:49:47.058317 3654 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-163?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"