Jun 20 18:23:11.107585 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jun 20 18:23:11.107630 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Jun 20 16:58:52 -00 2025 Jun 20 18:23:11.107655 kernel: KASLR disabled due to lack of seed Jun 20 18:23:11.107671 kernel: efi: EFI v2.7 by EDK II Jun 20 18:23:11.107686 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Jun 20 18:23:11.107701 kernel: secureboot: Secure boot disabled Jun 20 18:23:11.107718 kernel: ACPI: Early table checksum verification disabled Jun 20 18:23:11.107733 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jun 20 18:23:11.107748 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jun 20 18:23:11.107764 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 20 18:23:11.107784 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jun 20 18:23:11.107800 kernel: ACPI: FACS 0x0000000078630000 000040 Jun 20 18:23:11.107816 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 20 18:23:11.107831 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jun 20 18:23:11.107886 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jun 20 18:23:11.107904 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jun 20 18:23:11.107926 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 20 18:23:11.107942 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jun 20 18:23:11.107958 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jun 20 18:23:11.107974 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jun 20 18:23:11.107989 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jun 20 18:23:11.108007 kernel: printk: legacy bootconsole [uart0] enabled Jun 20 18:23:11.108024 kernel: ACPI: Use ACPI SPCR as default console: Yes Jun 20 18:23:11.108040 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jun 20 18:23:11.108056 kernel: NODE_DATA(0) allocated [mem 0x4b584cdc0-0x4b5853fff] Jun 20 18:23:11.108072 kernel: Zone ranges: Jun 20 18:23:11.108091 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jun 20 18:23:11.108107 kernel: DMA32 empty Jun 20 18:23:11.108123 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jun 20 18:23:11.108138 kernel: Device empty Jun 20 18:23:11.108153 kernel: Movable zone start for each node Jun 20 18:23:11.108169 kernel: Early memory node ranges Jun 20 18:23:11.108185 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jun 20 18:23:11.108200 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jun 20 18:23:11.108216 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jun 20 18:23:11.108231 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jun 20 18:23:11.108247 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jun 20 18:23:11.108263 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jun 20 18:23:11.108282 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jun 20 18:23:11.108299 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jun 20 18:23:11.108321 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jun 20 18:23:11.108338 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jun 20 18:23:11.108354 kernel: psci: probing for conduit method from ACPI. Jun 20 18:23:11.108374 kernel: psci: PSCIv1.0 detected in firmware. Jun 20 18:23:11.108390 kernel: psci: Using standard PSCI v0.2 function IDs Jun 20 18:23:11.108407 kernel: psci: Trusted OS migration not required Jun 20 18:23:11.108423 kernel: psci: SMC Calling Convention v1.1 Jun 20 18:23:11.108439 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jun 20 18:23:11.108455 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jun 20 18:23:11.108472 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 20 18:23:11.108489 kernel: Detected PIPT I-cache on CPU0 Jun 20 18:23:11.108505 kernel: CPU features: detected: GIC system register CPU interface Jun 20 18:23:11.108521 kernel: CPU features: detected: Spectre-v2 Jun 20 18:23:11.108538 kernel: CPU features: detected: Spectre-v3a Jun 20 18:23:11.108554 kernel: CPU features: detected: Spectre-BHB Jun 20 18:23:11.108575 kernel: CPU features: detected: ARM erratum 1742098 Jun 20 18:23:11.108591 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jun 20 18:23:11.108608 kernel: alternatives: applying boot alternatives Jun 20 18:23:11.108627 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=dc27555a94b81892dd9ef4952a54bd9fdf9ae918511eccef54084541db330bac Jun 20 18:23:11.108645 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 18:23:11.108662 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 18:23:11.108679 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 18:23:11.108695 kernel: Fallback order for Node 0: 0 Jun 20 18:23:11.108711 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jun 20 18:23:11.108728 kernel: Policy zone: Normal Jun 20 18:23:11.108748 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 18:23:11.108764 kernel: software IO TLB: area num 2. Jun 20 18:23:11.108781 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jun 20 18:23:11.108797 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 18:23:11.108814 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 18:23:11.108831 kernel: rcu: RCU event tracing is enabled. Jun 20 18:23:11.110921 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 18:23:11.110942 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 18:23:11.110960 kernel: Tracing variant of Tasks RCU enabled. Jun 20 18:23:11.110991 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 18:23:11.111014 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 18:23:11.111031 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 20 18:23:11.111058 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:23:11.111075 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 20 18:23:11.111092 kernel: GICv3: 96 SPIs implemented Jun 20 18:23:11.111109 kernel: GICv3: 0 Extended SPIs implemented Jun 20 18:23:11.111125 kernel: Root IRQ handler: gic_handle_irq Jun 20 18:23:11.111141 kernel: GICv3: GICv3 features: 16 PPIs Jun 20 18:23:11.111158 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jun 20 18:23:11.111174 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jun 20 18:23:11.111191 kernel: ITS [mem 0x10080000-0x1009ffff] Jun 20 18:23:11.111207 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) Jun 20 18:23:11.111224 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) Jun 20 18:23:11.111245 kernel: GICv3: using LPI property table @0x00000004000e0000 Jun 20 18:23:11.111262 kernel: ITS: Using hypervisor restricted LPI range [128] Jun 20 18:23:11.111278 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 Jun 20 18:23:11.111294 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 18:23:11.111311 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jun 20 18:23:11.111327 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jun 20 18:23:11.111344 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jun 20 18:23:11.111360 kernel: Console: colour dummy device 80x25 Jun 20 18:23:11.111377 kernel: printk: legacy console [tty1] enabled Jun 20 18:23:11.111394 kernel: ACPI: Core revision 20240827 Jun 20 18:23:11.111411 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jun 20 18:23:11.111433 kernel: pid_max: default: 32768 minimum: 301 Jun 20 18:23:11.111449 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 18:23:11.111466 kernel: landlock: Up and running. Jun 20 18:23:11.111483 kernel: SELinux: Initializing. Jun 20 18:23:11.111499 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 18:23:11.111516 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 18:23:11.111533 kernel: rcu: Hierarchical SRCU implementation. Jun 20 18:23:11.111550 kernel: rcu: Max phase no-delay instances is 400. Jun 20 18:23:11.111567 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 20 18:23:11.111588 kernel: Remapping and enabling EFI services. Jun 20 18:23:11.111604 kernel: smp: Bringing up secondary CPUs ... Jun 20 18:23:11.111621 kernel: Detected PIPT I-cache on CPU1 Jun 20 18:23:11.111637 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jun 20 18:23:11.111654 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 Jun 20 18:23:11.111671 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jun 20 18:23:11.111687 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 18:23:11.111704 kernel: SMP: Total of 2 processors activated. 
Jun 20 18:23:11.111720 kernel: CPU: All CPU(s) started at EL1 Jun 20 18:23:11.111741 kernel: CPU features: detected: 32-bit EL0 Support Jun 20 18:23:11.111768 kernel: CPU features: detected: 32-bit EL1 Support Jun 20 18:23:11.111786 kernel: CPU features: detected: CRC32 instructions Jun 20 18:23:11.111807 kernel: alternatives: applying system-wide alternatives Jun 20 18:23:11.111825 kernel: Memory: 3813536K/4030464K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 212156K reserved, 0K cma-reserved) Jun 20 18:23:11.113886 kernel: devtmpfs: initialized Jun 20 18:23:11.113942 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 18:23:11.113963 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 18:23:11.113990 kernel: 17024 pages in range for non-PLT usage Jun 20 18:23:11.114009 kernel: 508544 pages in range for PLT usage Jun 20 18:23:11.114026 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 18:23:11.114043 kernel: SMBIOS 3.0.0 present. Jun 20 18:23:11.114061 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jun 20 18:23:11.114079 kernel: DMI: Memory slots populated: 0/0 Jun 20 18:23:11.114097 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 18:23:11.114115 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 20 18:23:11.114133 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 20 18:23:11.114155 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 20 18:23:11.114173 kernel: audit: initializing netlink subsys (disabled) Jun 20 18:23:11.114190 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1 Jun 20 18:23:11.114207 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 18:23:11.114225 kernel: cpuidle: using governor menu Jun 20 18:23:11.114242 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 20 18:23:11.114260 kernel: ASID allocator initialised with 65536 entries Jun 20 18:23:11.114277 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 18:23:11.114298 kernel: Serial: AMBA PL011 UART driver Jun 20 18:23:11.114316 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 18:23:11.114334 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 18:23:11.114351 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 20 18:23:11.114368 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 20 18:23:11.114386 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 18:23:11.114403 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 18:23:11.114420 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 20 18:23:11.114438 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 20 18:23:11.114459 kernel: ACPI: Added _OSI(Module Device) Jun 20 18:23:11.114477 kernel: ACPI: Added _OSI(Processor Device) Jun 20 18:23:11.114494 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 18:23:11.114511 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 18:23:11.114528 kernel: ACPI: Interpreter enabled Jun 20 18:23:11.114546 kernel: ACPI: Using GIC for interrupt routing Jun 20 18:23:11.114563 kernel: ACPI: MCFG table detected, 1 entries Jun 20 18:23:11.114580 kernel: ACPI: CPU0 has been hot-added Jun 20 18:23:11.114598 kernel: ACPI: CPU1 has been hot-added Jun 20 18:23:11.114615 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jun 20 18:23:11.119031 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 18:23:11.119271 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jun 20 18:23:11.119452 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jun 20 18:23:11.119631 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jun 20 18:23:11.119808 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jun 20 18:23:11.119832 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jun 20 18:23:11.119874 kernel: acpiphp: Slot [1] registered Jun 20 18:23:11.119902 kernel: acpiphp: Slot [2] registered Jun 20 18:23:11.119920 kernel: acpiphp: Slot [3] registered Jun 20 18:23:11.119938 kernel: acpiphp: Slot [4] registered Jun 20 18:23:11.119956 kernel: acpiphp: Slot [5] registered Jun 20 18:23:11.119973 kernel: acpiphp: Slot [6] registered Jun 20 18:23:11.119990 kernel: acpiphp: Slot [7] registered Jun 20 18:23:11.120008 kernel: acpiphp: Slot [8] registered Jun 20 18:23:11.120025 kernel: acpiphp: Slot [9] registered Jun 20 18:23:11.120042 kernel: acpiphp: Slot [10] registered Jun 20 18:23:11.120064 kernel: acpiphp: Slot [11] registered Jun 20 18:23:11.120081 kernel: acpiphp: Slot [12] registered Jun 20 18:23:11.120099 kernel: acpiphp: Slot [13] registered Jun 20 18:23:11.120116 kernel: acpiphp: Slot [14] registered Jun 20 18:23:11.120133 kernel: acpiphp: Slot [15] registered Jun 20 18:23:11.120150 kernel: acpiphp: Slot [16] registered Jun 20 18:23:11.120167 kernel: acpiphp: Slot [17] registered Jun 20 18:23:11.120185 kernel: acpiphp: Slot [18] registered Jun 20 18:23:11.120202 kernel: acpiphp: Slot [19] registered Jun 20 18:23:11.120219 kernel: acpiphp: Slot [20] registered Jun 20 18:23:11.120240 kernel: acpiphp: Slot [21] registered Jun 20 
18:23:11.120258 kernel: acpiphp: Slot [22] registered Jun 20 18:23:11.120275 kernel: acpiphp: Slot [23] registered Jun 20 18:23:11.120292 kernel: acpiphp: Slot [24] registered Jun 20 18:23:11.120310 kernel: acpiphp: Slot [25] registered Jun 20 18:23:11.120327 kernel: acpiphp: Slot [26] registered Jun 20 18:23:11.120345 kernel: acpiphp: Slot [27] registered Jun 20 18:23:11.120362 kernel: acpiphp: Slot [28] registered Jun 20 18:23:11.120379 kernel: acpiphp: Slot [29] registered Jun 20 18:23:11.120400 kernel: acpiphp: Slot [30] registered Jun 20 18:23:11.120418 kernel: acpiphp: Slot [31] registered Jun 20 18:23:11.120435 kernel: PCI host bridge to bus 0000:00 Jun 20 18:23:11.120625 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jun 20 18:23:11.120792 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jun 20 18:23:11.126080 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jun 20 18:23:11.126271 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jun 20 18:23:11.126502 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jun 20 18:23:11.126734 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jun 20 18:23:11.126998 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jun 20 18:23:11.127204 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jun 20 18:23:11.127394 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jun 20 18:23:11.127579 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 20 18:23:11.127778 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jun 20 18:23:11.130137 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jun 20 18:23:11.130352 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jun 20 18:23:11.130541 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jun 20 18:23:11.130725 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 20 18:23:11.130947 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned Jun 20 18:23:11.131141 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned Jun 20 18:23:11.131331 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned Jun 20 18:23:11.131527 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned Jun 20 18:23:11.131715 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned Jun 20 18:23:11.135388 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jun 20 18:23:11.135578 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jun 20 18:23:11.135745 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jun 20 18:23:11.135770 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jun 20 18:23:11.135790 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jun 20 18:23:11.135818 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jun 20 18:23:11.135868 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jun 20 18:23:11.135890 kernel: iommu: Default domain type: Translated Jun 20 18:23:11.135909 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 20 18:23:11.135927 kernel: efivars: Registered efivars operations Jun 20 18:23:11.135946 kernel: vgaarb: loaded Jun 20 18:23:11.135965 kernel: clocksource: Switched to clocksource arch_sys_counter 
Jun 20 18:23:11.135983 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 18:23:11.136001 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 18:23:11.136027 kernel: pnp: PnP ACPI init Jun 20 18:23:11.136272 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jun 20 18:23:11.136302 kernel: pnp: PnP ACPI: found 1 devices Jun 20 18:23:11.136321 kernel: NET: Registered PF_INET protocol family Jun 20 18:23:11.136338 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 20 18:23:11.136357 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 20 18:23:11.136375 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 18:23:11.136393 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 18:23:11.136417 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 20 18:23:11.136435 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 20 18:23:11.136453 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 18:23:11.136471 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 18:23:11.136489 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 18:23:11.136506 kernel: PCI: CLS 0 bytes, default 64 Jun 20 18:23:11.136523 kernel: kvm [1]: HYP mode not available Jun 20 18:23:11.136541 kernel: Initialise system trusted keyrings Jun 20 18:23:11.136558 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 20 18:23:11.136580 kernel: Key type asymmetric registered Jun 20 18:23:11.136598 kernel: Asymmetric key parser 'x509' registered Jun 20 18:23:11.136615 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 20 18:23:11.136633 kernel: io scheduler mq-deadline registered Jun 20 18:23:11.136651 kernel: io scheduler kyber registered Jun 20 18:23:11.136668 kernel: io scheduler bfq registered Jun 20 18:23:11.138249 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jun 20 18:23:11.138294 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 20 18:23:11.138323 kernel: ACPI: button: Power Button [PWRB] Jun 20 18:23:11.138342 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jun 20 18:23:11.138360 kernel: ACPI: button: Sleep Button [SLPB] Jun 20 18:23:11.138378 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 18:23:11.138396 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jun 20 18:23:11.138608 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jun 20 18:23:11.138634 kernel: printk: legacy console [ttyS0] disabled Jun 20 18:23:11.138653 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jun 20 18:23:11.138671 kernel: printk: legacy console [ttyS0] enabled Jun 20 18:23:11.138693 kernel: printk: legacy bootconsole [uart0] disabled Jun 20 18:23:11.138711 kernel: thunder_xcv, ver 1.0 Jun 20 18:23:11.138729 kernel: thunder_bgx, ver 1.0 Jun 20 18:23:11.138746 kernel: nicpf, ver 1.0 Jun 20 18:23:11.138763 kernel: nicvf, ver 1.0 Jun 20 18:23:11.139002 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 20 18:23:11.139178 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-20T18:23:10 UTC (1750443790) Jun 20 18:23:11.139203 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 18:23:11.139227 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 
(0,80000003) counters available Jun 20 18:23:11.139245 kernel: NET: Registered PF_INET6 protocol family Jun 20 18:23:11.139262 kernel: watchdog: NMI not fully supported Jun 20 18:23:11.139280 kernel: watchdog: Hard watchdog permanently disabled Jun 20 18:23:11.139297 kernel: Segment Routing with IPv6 Jun 20 18:23:11.139314 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 18:23:11.139331 kernel: NET: Registered PF_PACKET protocol family Jun 20 18:23:11.139349 kernel: Key type dns_resolver registered Jun 20 18:23:11.139366 kernel: registered taskstats version 1 Jun 20 18:23:11.139383 kernel: Loading compiled-in X.509 certificates Jun 20 18:23:11.139405 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 4dab98fc4de70d482d00f54d1877f6231fc25377' Jun 20 18:23:11.139422 kernel: Demotion targets for Node 0: null Jun 20 18:23:11.139440 kernel: Key type .fscrypt registered Jun 20 18:23:11.139457 kernel: Key type fscrypt-provisioning registered Jun 20 18:23:11.139474 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 18:23:11.139491 kernel: ima: Allocated hash algorithm: sha1 Jun 20 18:23:11.139509 kernel: ima: No architecture policies found Jun 20 18:23:11.139527 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 20 18:23:11.139544 kernel: clk: Disabling unused clocks Jun 20 18:23:11.139565 kernel: PM: genpd: Disabling unused power domains Jun 20 18:23:11.139583 kernel: Warning: unable to open an initial console. Jun 20 18:23:11.139600 kernel: Freeing unused kernel memory: 39424K Jun 20 18:23:11.139617 kernel: Run /init as init process Jun 20 18:23:11.139634 kernel: with arguments: Jun 20 18:23:11.139652 kernel: /init Jun 20 18:23:11.139669 kernel: with environment: Jun 20 18:23:11.139686 kernel: HOME=/ Jun 20 18:23:11.139703 kernel: TERM=linux Jun 20 18:23:11.139724 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 18:23:11.139743 systemd[1]: Successfully made /usr/ read-only. Jun 20 18:23:11.139766 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:23:11.139786 systemd[1]: Detected virtualization amazon. Jun 20 18:23:11.139804 systemd[1]: Detected architecture arm64. Jun 20 18:23:11.139822 systemd[1]: Running in initrd. Jun 20 18:23:11.140694 systemd[1]: No hostname configured, using default hostname. Jun 20 18:23:11.140732 systemd[1]: Hostname set to . Jun 20 18:23:11.140753 systemd[1]: Initializing machine ID from VM UUID. Jun 20 18:23:11.140772 systemd[1]: Queued start job for default target initrd.target. Jun 20 18:23:11.140791 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:23:11.140811 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:23:11.140831 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 18:23:11.140982 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:23:11.141003 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 18:23:11.141031 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jun 20 18:23:11.141053 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 18:23:11.141073 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 18:23:11.141094 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:23:11.141113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:23:11.141132 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:23:11.141152 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:23:11.141176 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:23:11.141196 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:23:11.141215 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:23:11.141235 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:23:11.141254 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 18:23:11.141273 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 18:23:11.141293 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:23:11.141312 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:23:11.141331 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:23:11.141355 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:23:11.141374 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 18:23:11.141393 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:23:11.141413 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 18:23:11.141432 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 18:23:11.141452 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 18:23:11.141471 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:23:11.141490 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:23:11.141513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:23:11.141533 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 18:23:11.141553 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:23:11.141572 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 18:23:11.141592 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 18:23:11.141656 systemd-journald[258]: Collecting audit messages is disabled. Jun 20 18:23:11.141700 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:23:11.141720 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 18:23:11.141744 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:23:11.141764 kernel: Bridge firewalling registered Jun 20 18:23:11.141800 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jun 20 18:23:11.141820 systemd-journald[258]: Journal started Jun 20 18:23:11.141890 systemd-journald[258]: Runtime Journal (/run/log/journal/ec272ccf48053908ce9c752b38cef2da) is 8M, max 75.3M, 67.3M free. Jun 20 18:23:11.080243 systemd-modules-load[260]: Inserted module 'overlay' Jun 20 18:23:11.152212 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:23:11.139877 systemd-modules-load[260]: Inserted module 'br_netfilter' Jun 20 18:23:11.159922 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:23:11.169256 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:23:11.179936 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:23:11.182137 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:23:11.213939 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:23:11.227216 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 18:23:11.232024 systemd-tmpfiles[283]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 18:23:11.237292 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:23:11.248397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:23:11.267728 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:23:11.281562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:23:11.302394 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=dc27555a94b81892dd9ef4952a54bd9fdf9ae918511eccef54084541db330bac Jun 20 18:23:11.389153 systemd-resolved[300]: Positive Trust Anchors: Jun 20 18:23:11.389180 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:23:11.389241 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:23:11.443874 kernel: SCSI subsystem initialized Jun 20 18:23:11.451871 kernel: Loading iSCSI transport class v2.0-870. Jun 20 18:23:11.463886 kernel: iscsi: registered transport (tcp) Jun 20 18:23:11.485582 kernel: iscsi: registered transport (qla4xxx) Jun 20 18:23:11.485665 kernel: QLogic iSCSI HBA Driver Jun 20 18:23:11.519014 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:23:11.561131 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jun 20 18:23:11.570379 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:23:11.646878 kernel: random: crng init done Jun 20 18:23:11.647235 systemd-resolved[300]: Defaulting to hostname 'linux'. Jun 20 18:23:11.650923 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:23:11.657212 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:23:11.680058 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 18:23:11.687061 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 18:23:11.784902 kernel: raid6: neonx8 gen() 6480 MB/s Jun 20 18:23:11.801870 kernel: raid6: neonx4 gen() 6440 MB/s Jun 20 18:23:11.818868 kernel: raid6: neonx2 gen() 5364 MB/s Jun 20 18:23:11.835868 kernel: raid6: neonx1 gen() 3931 MB/s Jun 20 18:23:11.852867 kernel: raid6: int64x8 gen() 3634 MB/s Jun 20 18:23:11.869868 kernel: raid6: int64x4 gen() 3678 MB/s Jun 20 18:23:11.886868 kernel: raid6: int64x2 gen() 3562 MB/s Jun 20 18:23:11.904747 kernel: raid6: int64x1 gen() 2767 MB/s Jun 20 18:23:11.904784 kernel: raid6: using algorithm neonx8 gen() 6480 MB/s Jun 20 18:23:11.922765 kernel: raid6: .... xor() 4757 MB/s, rmw enabled Jun 20 18:23:11.922804 kernel: raid6: using neon recovery algorithm Jun 20 18:23:11.931094 kernel: xor: measuring software checksum speed Jun 20 18:23:11.931146 kernel: 8regs : 12936 MB/sec Jun 20 18:23:11.932212 kernel: 32regs : 13042 MB/sec Jun 20 18:23:11.933488 kernel: arm64_neon : 9074 MB/sec Jun 20 18:23:11.933519 kernel: xor: using function: 32regs (13042 MB/sec) Jun 20 18:23:12.025874 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 18:23:12.036735 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:23:12.046502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:23:12.097864 systemd-udevd[507]: Using default interface naming scheme 'v255'. Jun 20 18:23:12.110026 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:23:12.118058 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 18:23:12.160795 dracut-pre-trigger[512]: rd.md=0: removing MD RAID activation Jun 20 18:23:12.205321 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:23:12.210185 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:23:12.335424 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:23:12.357675 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 18:23:12.513901 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jun 20 18:23:12.518775 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jun 20 18:23:12.526162 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 20 18:23:12.526516 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jun 20 18:23:12.526545 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 20 18:23:12.526771 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 20 18:23:12.527388 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:23:12.529040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:23:12.536247 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 20 18:23:12.544874 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:c2:5f:e3:a8:a7 Jun 20 18:23:12.545176 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 20 18:23:12.546881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:23:12.552367 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:23:12.563240 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 20 18:23:12.563309 kernel: GPT:9289727 != 16777215 Jun 20 18:23:12.563333 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 18:23:12.563357 kernel: GPT:9289727 != 16777215 Jun 20 18:23:12.563379 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 20 18:23:12.563402 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:23:12.570037 (udev-worker)[567]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:23:12.601649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:23:12.619891 kernel: nvme nvme0: using unchecked data buffer Jun 20 18:23:12.728081 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 20 18:23:12.794671 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 20 18:23:12.802064 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jun 20 18:23:12.810447 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 18:23:12.867608 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jun 20 18:23:12.892874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 20 18:23:12.896207 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:23:12.912993 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:23:12.935568 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:23:12.941621 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 18:23:12.951099 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 18:23:12.971344 disk-uuid[687]: Primary Header is updated. Jun 20 18:23:12.971344 disk-uuid[687]: Secondary Entries is updated. Jun 20 18:23:12.971344 disk-uuid[687]: Secondary Header is updated. Jun 20 18:23:12.982988 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:23:12.991320 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:23:12.998882 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:23:13.998305 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:23:14.000526 disk-uuid[688]: The operation has completed successfully. Jun 20 18:23:14.182560 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 18:23:14.183430 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 18:23:14.291026 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 18:23:14.312555 sh[955]: Success Jun 20 18:23:14.339277 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jun 20 18:23:14.339353 kernel: device-mapper: uevent: version 1.0.3 Jun 20 18:23:14.340171 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 18:23:14.351866 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jun 20 18:23:14.474724 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 18:23:14.482207 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 18:23:14.506319 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 18:23:14.538904 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 18:23:14.541912 kernel: BTRFS: device fsid eac9c4a0-5098-4f12-a7ad-af09956ff0e3 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (979) Jun 20 18:23:14.546284 kernel: BTRFS info (device dm-0): first mount of filesystem eac9c4a0-5098-4f12-a7ad-af09956ff0e3 Jun 20 18:23:14.546341 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:23:14.546367 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 18:23:14.584656 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 18:23:14.589057 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 18:23:14.592131 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 18:23:14.593414 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 18:23:14.610729 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 18:23:14.679911 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1026) Jun 20 18:23:14.685926 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:23:14.685996 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:23:14.687386 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 18:23:14.700908 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:23:14.704405 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 18:23:14.710445 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 18:23:14.799897 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:23:14.808022 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:23:14.877994 systemd-networkd[1150]: lo: Link UP Jun 20 18:23:14.879985 systemd-networkd[1150]: lo: Gained carrier Jun 20 18:23:14.882477 systemd-networkd[1150]: Enumeration completed Jun 20 18:23:14.884418 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:23:14.885266 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:23:14.885274 systemd-networkd[1150]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:23:14.892104 systemd[1]: Reached target network.target - Network. 
Jun 20 18:23:14.895244 systemd-networkd[1150]: eth0: Link UP Jun 20 18:23:14.895251 systemd-networkd[1150]: eth0: Gained carrier Jun 20 18:23:14.895270 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:23:14.914939 systemd-networkd[1150]: eth0: DHCPv4 address 172.31.31.140/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 20 18:23:14.980913 ignition[1085]: Ignition 2.21.0 Jun 20 18:23:14.981699 ignition[1085]: Stage: fetch-offline Jun 20 18:23:14.982124 ignition[1085]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:14.982147 ignition[1085]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:14.989601 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:23:14.982888 ignition[1085]: Ignition finished successfully Jun 20 18:23:15.001148 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 18:23:15.042066 ignition[1161]: Ignition 2.21.0 Jun 20 18:23:15.042888 ignition[1161]: Stage: fetch Jun 20 18:23:15.043374 ignition[1161]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:15.043398 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:15.043552 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:15.060288 ignition[1161]: PUT result: OK Jun 20 18:23:15.064082 ignition[1161]: parsed url from cmdline: "" Jun 20 18:23:15.064213 ignition[1161]: no config URL provided Jun 20 18:23:15.064232 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:23:15.064728 ignition[1161]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:23:15.064778 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:15.069307 ignition[1161]: PUT result: OK Jun 20 18:23:15.069437 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 20 18:23:15.083956 ignition[1161]: GET result: OK Jun 20 18:23:15.084140 ignition[1161]: parsing config with SHA512: b1b2d1e45ce2047826d95a339ec538c05769633ae65439290dec5e231f4f824fa63983196fd6fc9eea5b7d63d25087cd8ae5b0866a6b201ab2e75bc2fbece244 Jun 20 18:23:15.093544 unknown[1161]: fetched base config from "system" Jun 20 18:23:15.093577 unknown[1161]: fetched base config from "system" Jun 20 18:23:15.093590 unknown[1161]: fetched user config from "aws" Jun 20 18:23:15.096264 ignition[1161]: fetch: fetch complete Jun 20 18:23:15.096280 ignition[1161]: fetch: fetch passed Jun 20 18:23:15.103269 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 18:23:15.096729 ignition[1161]: Ignition finished successfully Jun 20 18:23:15.110778 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 18:23:15.155403 ignition[1167]: Ignition 2.21.0 Jun 20 18:23:15.155758 ignition[1167]: Stage: kargs Jun 20 18:23:15.156901 ignition[1167]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:15.156925 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:15.157073 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:15.159400 ignition[1167]: PUT result: OK Jun 20 18:23:15.173349 ignition[1167]: kargs: kargs passed Jun 20 18:23:15.175506 ignition[1167]: Ignition finished successfully Jun 20 18:23:15.180976 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 18:23:15.189059 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 20 18:23:15.228795 ignition[1173]: Ignition 2.21.0 Jun 20 18:23:15.229652 ignition[1173]: Stage: disks Jun 20 18:23:15.230226 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:15.230249 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:15.230447 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:15.235262 ignition[1173]: PUT result: OK Jun 20 18:23:15.244125 ignition[1173]: disks: disks passed Jun 20 18:23:15.244228 ignition[1173]: Ignition finished successfully Jun 20 18:23:15.249932 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 18:23:15.252202 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 18:23:15.252287 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 18:23:15.252622 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:23:15.254055 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:23:15.254395 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:23:15.259035 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 18:23:15.323929 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 20 18:23:15.330796 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 18:23:15.339470 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 18:23:15.475866 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 40d60ae8-3eda-4465-8dd7-9dbfcfd71664 r/w with ordered data mode. Quota mode: none. Jun 20 18:23:15.477240 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 18:23:15.481396 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 18:23:15.487055 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:23:15.502988 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 18:23:15.518607 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 20 18:23:15.518716 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 18:23:15.518767 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:23:15.539716 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 18:23:15.545888 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1201) Jun 20 18:23:15.551341 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:23:15.551377 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:23:15.552911 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 18:23:15.556955 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 18:23:15.566294 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 18:23:15.680864 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 18:23:15.690278 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory Jun 20 18:23:15.699969 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 18:23:15.709522 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 18:23:15.863005 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 18:23:15.869975 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 18:23:15.886768 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 18:23:15.902432 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 18:23:15.905901 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:23:15.951135 ignition[1314]: INFO : Ignition 2.21.0 Jun 20 18:23:15.954403 ignition[1314]: INFO : Stage: mount Jun 20 18:23:15.954403 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:15.954403 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:15.954403 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:15.956590 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 18:23:15.965526 ignition[1314]: INFO : PUT result: OK Jun 20 18:23:15.972394 ignition[1314]: INFO : mount: mount passed Jun 20 18:23:15.975016 ignition[1314]: INFO : Ignition finished successfully Jun 20 18:23:15.978752 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 18:23:15.984928 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 18:23:16.018156 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:23:16.051873 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1326) Jun 20 18:23:16.056336 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:23:16.056388 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:23:16.056414 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 18:23:16.066338 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 18:23:16.104413 ignition[1343]: INFO : Ignition 2.21.0 Jun 20 18:23:16.104413 ignition[1343]: INFO : Stage: files Jun 20 18:23:16.108332 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:16.108332 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:16.108332 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:16.108332 ignition[1343]: INFO : PUT result: OK Jun 20 18:23:16.121727 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping Jun 20 18:23:16.125553 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 18:23:16.125553 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 18:23:16.133149 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 18:23:16.133149 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 18:23:16.139648 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 18:23:16.138047 unknown[1343]: wrote ssh authorized keys file for user: core Jun 20 18:23:16.146593 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jun 20 18:23:16.146593 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jun 20 18:23:16.287880 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 18:23:16.841007 systemd-networkd[1150]: eth0: Gained IPv6LL Jun 20 18:23:17.340895 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jun 20 18:23:17.340895 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:23:17.340895 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jun 20 18:23:17.872005 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 18:23:18.009691 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:23:18.009691 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 18:23:18.017269 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 18:23:18.017269 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:23:18.017269 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:23:18.017269 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:23:18.017269 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:23:18.017269 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:23:18.017269 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:23:18.042993 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:23:18.042993 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:23:18.042993 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 18:23:18.042993 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 18:23:18.042993 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 18:23:18.042993 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jun 20 18:23:18.810330 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 18:23:19.158762 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 18:23:19.158762 ignition[1343]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 18:23:19.166432 ignition[1343]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:23:19.170287 ignition[1343]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:23:19.170287 ignition[1343]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 18:23:19.170287 ignition[1343]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 20 18:23:19.170287 ignition[1343]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 18:23:19.170287 ignition[1343]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:23:19.170287 ignition[1343]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:23:19.170287 ignition[1343]: INFO : files: files passed Jun 20 18:23:19.170287 ignition[1343]: INFO : Ignition finished successfully Jun 20 18:23:19.202471 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 18:23:19.209773 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 18:23:19.215696 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 18:23:19.240915 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 18:23:19.241116 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jun 20 18:23:19.255944 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:23:19.255944 initrd-setup-root-after-ignition[1373]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:23:19.265355 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:23:19.276734 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:23:19.282937 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 18:23:19.288451 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 18:23:19.376692 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 18:23:19.378584 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 18:23:19.385740 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 18:23:19.389084 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 18:23:19.395937 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 18:23:19.398066 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 18:23:19.444989 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:23:19.452715 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 18:23:19.505084 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:23:19.510578 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:23:19.514093 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 18:23:19.514380 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 18:23:19.514602 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:23:19.525512 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 18:23:19.530854 systemd[1]: Stopped target basic.target - Basic System. Jun 20 18:23:19.536399 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 18:23:19.540567 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:23:19.557680 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 18:23:19.562527 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 18:23:19.578061 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 18:23:19.579611 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:23:19.583776 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 18:23:19.588794 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 18:23:19.593003 systemd[1]: Stopped target swap.target - Swaps. Jun 20 18:23:19.597011 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 18:23:19.597259 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:23:19.604449 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:23:19.608700 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:23:19.612569 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jun 20 18:23:19.618211 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:23:19.624059 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 18:23:19.624769 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 18:23:19.632666 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 18:23:19.632908 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:23:19.640921 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 18:23:19.641955 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 18:23:19.649415 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 18:23:19.654217 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 18:23:19.657087 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:23:19.670416 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 18:23:19.675010 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 18:23:19.676093 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:23:19.684325 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 18:23:19.685112 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:23:19.704049 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 18:23:19.706295 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 18:23:19.736903 ignition[1397]: INFO : Ignition 2.21.0 Jun 20 18:23:19.738915 ignition[1397]: INFO : Stage: umount Jun 20 18:23:19.741568 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:19.743781 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:19.746280 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:19.749584 ignition[1397]: INFO : PUT result: OK Jun 20 18:23:19.754452 ignition[1397]: INFO : umount: umount passed Jun 20 18:23:19.756423 ignition[1397]: INFO : Ignition finished successfully Jun 20 18:23:19.763026 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 18:23:19.763690 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 18:23:19.770261 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 18:23:19.770963 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 18:23:19.771044 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 18:23:19.776288 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 18:23:19.777190 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 18:23:19.780005 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 18:23:19.780498 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 18:23:19.788654 systemd[1]: Stopped target network.target - Network. Jun 20 18:23:19.790997 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 18:23:19.791092 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:23:19.794054 systemd[1]: Stopped target paths.target - Path Units. Jun 20 18:23:19.796361 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jun 20 18:23:19.801263 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:23:19.805071 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 18:23:19.812557 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 18:23:19.815462 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 18:23:19.815532 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:23:19.819311 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 18:23:19.819373 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:23:19.823283 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 18:23:19.823377 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 18:23:19.827144 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 18:23:19.827222 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 18:23:19.837388 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 18:23:19.842017 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 18:23:19.856186 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 18:23:19.856358 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 18:23:19.860536 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 18:23:19.860688 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 18:23:19.895716 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 18:23:19.900104 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 18:23:19.912526 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 18:23:19.914317 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 18:23:19.914532 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 18:23:19.924765 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 18:23:19.926656 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 20 18:23:19.931986 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 18:23:19.932066 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:23:19.934134 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 18:23:19.944331 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 18:23:19.944473 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:23:19.966248 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:23:19.966825 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:23:19.975400 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 18:23:19.975988 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 18:23:19.982540 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 18:23:19.982647 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:23:19.991816 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:23:19.997258 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jun 20 18:23:19.997380 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:23:20.018749 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 18:23:20.022912 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 18:23:20.027221 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 18:23:20.027482 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:23:20.033237 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 18:23:20.033342 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 18:23:20.041279 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 18:23:20.041360 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:23:20.041903 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 18:23:20.041992 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:23:20.042684 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 18:23:20.042761 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 18:23:20.046717 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:23:20.046808 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:23:20.049141 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 18:23:20.081075 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 20 18:23:20.082393 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:23:20.090149 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 18:23:20.090238 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:23:20.102005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:23:20.102092 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:23:20.115662 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 20 18:23:20.115782 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 18:23:20.115892 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:23:20.127660 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 18:23:20.127890 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 18:23:20.138687 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 18:23:20.146191 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 18:23:20.182711 systemd[1]: Switching root. Jun 20 18:23:20.214892 systemd-journald[258]: Journal stopped Jun 20 18:23:22.205500 systemd-journald[258]: Received SIGTERM from PID 1 (systemd). 
Jun 20 18:23:22.205628 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 18:23:22.205669 kernel: SELinux: policy capability open_perms=1 Jun 20 18:23:22.205704 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 18:23:22.205736 kernel: SELinux: policy capability always_check_network=0 Jun 20 18:23:22.205763 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 18:23:22.205792 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 18:23:22.223181 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 18:23:22.223234 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 18:23:22.223275 kernel: SELinux: policy capability userspace_initial_context=0 Jun 20 18:23:22.223304 kernel: audit: type=1403 audit(1750443800.481:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 18:23:22.223343 systemd[1]: Successfully loaded SELinux policy in 60.008ms. Jun 20 18:23:22.223406 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.570ms. Jun 20 18:23:22.223438 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:23:22.223469 systemd[1]: Detected virtualization amazon. Jun 20 18:23:22.223496 systemd[1]: Detected architecture arm64. Jun 20 18:23:22.223525 systemd[1]: Detected first boot. Jun 20 18:23:22.223555 systemd[1]: Initializing machine ID from VM UUID. Jun 20 18:23:22.223584 zram_generator::config[1440]: No configuration found. Jun 20 18:23:22.223616 kernel: NET: Registered PF_VSOCK protocol family Jun 20 18:23:22.223649 systemd[1]: Populated /etc with preset unit settings. Jun 20 18:23:22.223678 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 18:23:22.223706 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 18:23:22.223736 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 18:23:22.223766 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 18:23:22.223798 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 18:23:22.223828 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 18:23:22.223960 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 18:23:22.223992 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 18:23:22.224029 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 18:23:22.224060 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 18:23:22.224090 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 18:23:22.224119 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 18:23:22.224147 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:23:22.224176 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:23:22.224204 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jun 20 18:23:22.224231 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 18:23:22.224261 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 18:23:22.224294 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:23:22.224324 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 18:23:22.224365 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:23:22.224395 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:23:22.224423 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 18:23:22.224452 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 18:23:22.224481 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 18:23:22.224514 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 18:23:22.224544 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:23:22.224575 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:23:22.224603 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:23:22.224631 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:23:22.224661 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 18:23:22.224690 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 18:23:22.224722 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 18:23:22.224753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:23:22.224783 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:23:22.224817 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:23:22.229882 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 18:23:22.229930 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 18:23:22.229960 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 18:23:22.229991 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 18:23:22.230019 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 18:23:22.230047 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 18:23:22.230075 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 18:23:22.230105 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 18:23:22.230143 systemd[1]: Reached target machines.target - Containers. Jun 20 18:23:22.230172 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 18:23:22.230203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:23:22.230922 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:23:22.230961 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 18:23:22.230991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jun 20 18:23:22.231019 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:23:22.231049 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:23:22.231084 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 18:23:22.231115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:23:22.231145 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 18:23:22.231175 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 18:23:22.231203 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 18:23:22.231231 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 18:23:22.231261 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 18:23:22.231290 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:23:22.231323 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:23:22.231351 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:23:22.243154 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:23:22.243206 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 18:23:22.243237 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 18:23:22.243276 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:23:22.243308 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 18:23:22.243337 systemd[1]: Stopped verity-setup.service. Jun 20 18:23:22.243368 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 18:23:22.243399 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 18:23:22.243433 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 18:23:22.243465 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 18:23:22.243493 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 18:23:22.243523 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 18:23:22.243551 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:23:22.243578 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 18:23:22.243605 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 18:23:22.243635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:23:22.243663 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:23:22.243694 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:23:22.243726 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:23:22.243754 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:23:22.243782 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 18:23:22.243814 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jun 20 18:23:22.247105 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 18:23:22.247164 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:23:22.247195 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 18:23:22.247224 kernel: loop: module loaded Jun 20 18:23:22.247260 kernel: fuse: init (API version 7.41) Jun 20 18:23:22.247288 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 18:23:22.247318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:23:22.247346 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 18:23:22.247375 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:23:22.247403 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 18:23:22.247430 kernel: ACPI: bus type drm_connector registered Jun 20 18:23:22.247459 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:23:22.247491 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 18:23:22.247524 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:23:22.247552 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:23:22.247582 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 18:23:22.247610 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 18:23:22.247641 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 18:23:22.247670 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:23:22.247746 systemd-journald[1526]: Collecting audit messages is disabled. Jun 20 18:23:22.247797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:23:22.247827 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:23:22.248084 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 18:23:22.248119 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 18:23:22.248147 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 18:23:22.248183 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 18:23:22.248211 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:23:22.248242 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 18:23:22.248273 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:23:22.248305 systemd-journald[1526]: Journal started Jun 20 18:23:22.248353 systemd-journald[1526]: Runtime Journal (/run/log/journal/ec272ccf48053908ce9c752b38cef2da) is 8M, max 75.3M, 67.3M free. Jun 20 18:23:21.477393 systemd[1]: Queued start job for default target multi-user.target. Jun 20 18:23:22.270063 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 18:23:22.270133 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 20 18:23:22.270170 kernel: loop0: detected capacity change from 0 to 207008 Jun 20 18:23:21.493064 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 20 18:23:21.493905 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 18:23:22.261905 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:23:22.274486 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 18:23:22.319471 systemd-journald[1526]: Time spent on flushing to /var/log/journal/ec272ccf48053908ce9c752b38cef2da is 61.064ms for 933 entries. Jun 20 18:23:22.319471 systemd-journald[1526]: System Journal (/var/log/journal/ec272ccf48053908ce9c752b38cef2da) is 8M, max 195.6M, 187.6M free. Jun 20 18:23:22.414101 systemd-journald[1526]: Received client request to flush runtime journal. Jun 20 18:23:22.350543 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 18:23:22.421033 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 18:23:22.454739 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:23:22.480491 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 18:23:22.486778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:23:22.498511 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 18:23:22.508041 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 18:23:22.531488 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 18:23:22.564949 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 18:23:22.571864 systemd-tmpfiles[1588]: ACLs are not supported, ignoring. Jun 20 18:23:22.571904 systemd-tmpfiles[1588]: ACLs are not supported, ignoring. Jun 20 18:23:22.582922 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:23:22.603920 kernel: loop1: detected capacity change from 0 to 61240 Jun 20 18:23:22.651884 kernel: loop2: detected capacity change from 0 to 138376 Jun 20 18:23:22.716929 kernel: loop3: detected capacity change from 0 to 107312 Jun 20 18:23:22.775876 kernel: loop4: detected capacity change from 0 to 207008 Jun 20 18:23:22.821865 kernel: loop5: detected capacity change from 0 to 61240 Jun 20 18:23:22.854876 kernel: loop6: detected capacity change from 0 to 138376 Jun 20 18:23:22.887881 kernel: loop7: detected capacity change from 0 to 107312 Jun 20 18:23:22.917153 (sd-merge)[1599]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jun 20 18:23:22.918946 (sd-merge)[1599]: Merged extensions into '/usr'. Jun 20 18:23:22.928978 systemd[1]: Reload requested from client PID 1555 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 18:23:22.929009 systemd[1]: Reloading... Jun 20 18:23:23.086180 zram_generator::config[1626]: No configuration found. Jun 20 18:23:23.153326 ldconfig[1548]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 18:23:23.309737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:23:23.504917 systemd[1]: Reloading finished in 575 ms. 
Jun 20 18:23:23.531927 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 18:23:23.535384 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 18:23:23.538569 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 18:23:23.556024 systemd[1]: Starting ensure-sysext.service... Jun 20 18:23:23.562138 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:23:23.572801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:23:23.604390 systemd[1]: Reload requested from client PID 1679 ('systemctl') (unit ensure-sysext.service)... Jun 20 18:23:23.604576 systemd[1]: Reloading... Jun 20 18:23:23.642655 systemd-udevd[1681]: Using default interface naming scheme 'v255'. Jun 20 18:23:23.659752 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 18:23:23.660591 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 20 18:23:23.661346 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 18:23:23.662069 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 18:23:23.665876 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 18:23:23.666474 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Jun 20 18:23:23.666625 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Jun 20 18:23:23.682145 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:23:23.682175 systemd-tmpfiles[1680]: Skipping /boot Jun 20 18:23:23.712350 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:23:23.712382 systemd-tmpfiles[1680]: Skipping /boot Jun 20 18:23:23.850318 zram_generator::config[1723]: No configuration found. Jun 20 18:23:24.051175 (udev-worker)[1773]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:23:24.183383 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:23:24.432712 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 18:23:24.433600 systemd[1]: Reloading finished in 828 ms. Jun 20 18:23:24.479579 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:23:24.503346 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:23:24.625256 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:23:24.632278 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 18:23:24.635033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:23:24.639255 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:23:24.645384 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jun 20 18:23:24.653071 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:23:24.655505 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:23:24.655735 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:23:24.658123 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 18:23:24.668897 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:23:24.675577 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:23:24.681981 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 18:23:24.692088 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:23:24.692427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:23:24.692604 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:23:24.702538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:23:24.776958 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:23:24.779337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:23:24.779583 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:23:24.781025 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 18:23:24.815526 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 20 18:23:24.823273 systemd[1]: Finished ensure-sysext.service. Jun 20 18:23:24.845971 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 18:23:24.864252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:23:24.887376 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:23:24.892645 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:23:24.893107 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:23:24.897000 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:23:24.898673 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:23:24.903919 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 18:23:24.917710 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:23:24.919341 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:23:24.945215 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jun 20 18:23:24.948813 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:23:24.949131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:23:24.956352 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 18:23:24.966389 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 18:23:24.975451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:23:24.984368 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 18:23:25.022604 augenrules[1936]: No rules Jun 20 18:23:25.028576 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:23:25.030207 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:23:25.041533 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 18:23:25.077229 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 18:23:25.093046 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 18:23:25.190499 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 18:23:25.249325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:23:25.326265 systemd-networkd[1900]: lo: Link UP Jun 20 18:23:25.326743 systemd-networkd[1900]: lo: Gained carrier Jun 20 18:23:25.329518 systemd-networkd[1900]: Enumeration completed Jun 20 18:23:25.329867 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:23:25.333138 systemd-networkd[1900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:23:25.335104 systemd-networkd[1900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:23:25.336901 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 18:23:25.342280 systemd-networkd[1900]: eth0: Link UP Jun 20 18:23:25.343264 systemd-networkd[1900]: eth0: Gained carrier Jun 20 18:23:25.343424 systemd-networkd[1900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:23:25.344366 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 18:23:25.353120 systemd-resolved[1901]: Positive Trust Anchors: Jun 20 18:23:25.354349 systemd-resolved[1901]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:23:25.354422 systemd-resolved[1901]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:23:25.359963 systemd-networkd[1900]: eth0: DHCPv4 address 172.31.31.140/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 20 18:23:25.367273 systemd-resolved[1901]: Defaulting to hostname 'linux'. Jun 20 18:23:25.372544 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:23:25.375185 systemd[1]: Reached target network.target - Network. Jun 20 18:23:25.378989 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:23:25.382157 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:23:25.384578 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 18:23:25.391220 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 18:23:25.394352 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 18:23:25.396789 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 18:23:25.399426 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 18:23:25.402048 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 18:23:25.402106 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:23:25.404041 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:23:25.407614 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 18:23:25.412576 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 18:23:25.419420 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 18:23:25.422559 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 18:23:25.425353 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 18:23:25.432079 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 18:23:25.435201 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 18:23:25.440926 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 18:23:25.443983 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 18:23:25.448051 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:23:25.450936 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:23:25.453190 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:23:25.453244 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
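
At this point eth0 has a DHCPv4 lease for 172.31.31.140/20 with gateway 172.31.16.1, i.e. both addresses sit inside the 172.31.16.0/20 subnet, and systemd-resolved seeds DNSSEC validation with the root-zone DS record shown above. A quick standard-library check of the subnet arithmetic, using only the values copied from the log:

    import ipaddress

    # Values from the DHCPv4 lease in the log.
    iface = ipaddress.ip_interface("172.31.31.140/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(gateway in iface.network)     # True: the gateway is on-link
    print(iface.network.num_addresses)  # 4096 addresses in a /20
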
Jun 20 18:23:25.455532 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 18:23:25.462133 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 18:23:25.467155 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 18:23:25.472988 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 18:23:25.485497 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 18:23:25.493019 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 18:23:25.494663 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 18:23:25.500735 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 18:23:25.514482 systemd[1]: Started ntpd.service - Network Time Service. Jun 20 18:23:25.531029 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 18:23:25.541302 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 20 18:23:25.550292 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 18:23:25.555609 jq[1967]: false Jun 20 18:23:25.563234 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 18:23:25.583148 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 18:23:25.588496 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 18:23:25.594324 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 18:23:25.601336 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 18:23:25.610128 extend-filesystems[1968]: Found /dev/nvme0n1p6 Jun 20 18:23:25.611576 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 18:23:25.628904 extend-filesystems[1968]: Found /dev/nvme0n1p9 Jun 20 18:23:25.630969 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 18:23:25.639466 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 18:23:25.640973 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 18:23:25.672655 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 18:23:25.675036 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 18:23:25.697323 extend-filesystems[1968]: Checking size of /dev/nvme0n1p9 Jun 20 18:23:25.725528 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 18:23:25.737201 extend-filesystems[1968]: Resized partition /dev/nvme0n1p9 Jun 20 18:23:25.731866 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jun 20 18:23:25.752052 extend-filesystems[2007]: resize2fs 1.47.2 (1-Jan-2025) Jun 20 18:23:25.761422 tar[1989]: linux-arm64/LICENSE Jun 20 18:23:25.761422 tar[1989]: linux-arm64/helm Jun 20 18:23:25.777876 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 20 18:23:25.785263 ntpd[1970]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:24:50 UTC 2025 (1): Starting Jun 20 18:23:25.785334 ntpd[1970]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 18:23:25.786028 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:24:50 UTC 2025 (1): Starting Jun 20 18:23:25.786028 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 18:23:25.786028 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: ---------------------------------------------------- Jun 20 18:23:25.786028 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: ntp-4 is maintained by Network Time Foundation, Jun 20 18:23:25.786028 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 20 18:23:25.786028 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: corporation. Support and training for ntp-4 are Jun 20 18:23:25.786028 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: available at https://www.nwtime.org/support Jun 20 18:23:25.786028 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: ---------------------------------------------------- Jun 20 18:23:25.785353 ntpd[1970]: ---------------------------------------------------- Jun 20 18:23:25.785372 ntpd[1970]: ntp-4 is maintained by Network Time Foundation, Jun 20 18:23:25.785390 ntpd[1970]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 20 18:23:25.785407 ntpd[1970]: corporation. Support and training for ntp-4 are Jun 20 18:23:25.785424 ntpd[1970]: available at https://www.nwtime.org/support Jun 20 18:23:25.785441 ntpd[1970]: ---------------------------------------------------- Jun 20 18:23:25.804080 jq[1984]: true Jun 20 18:23:25.818342 ntpd[1970]: proto: precision = 0.096 usec (-23) Jun 20 18:23:25.823300 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: proto: precision = 0.096 usec (-23) Jun 20 18:23:25.825573 ntpd[1970]: basedate set to 2025-06-08 Jun 20 18:23:25.827005 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: basedate set to 2025-06-08 Jun 20 18:23:25.827005 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: gps base set to 2025-06-08 (week 2370) Jun 20 18:23:25.825623 ntpd[1970]: gps base set to 2025-06-08 (week 2370) Jun 20 18:23:25.835294 update_engine[1983]: I20250620 18:23:25.835144 1983 main.cc:92] Flatcar Update Engine starting Jun 20 18:23:25.839108 (ntainerd)[2011]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 18:23:25.838713 ntpd[1970]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 18:23:25.840474 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 18:23:25.840474 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 18:23:25.838794 ntpd[1970]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 18:23:25.847820 ntpd[1970]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 18:23:25.851677 ntpd[1970]: Listen normally on 3 eth0 172.31.31.140:123 Jun 20 18:23:25.940747 update_engine[1983]: I20250620 18:23:25.907238 1983 update_check_scheduler.cc:74] Next update check in 4m34s Jun 20 18:23:25.940809 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 18:23:25.940809 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: Listen normally on 3 eth0 
172.31.31.140:123 Jun 20 18:23:25.940809 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: Listen normally on 4 lo [::1]:123 Jun 20 18:23:25.940809 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: bind(21) AF_INET6 fe80::4c2:5fff:fee3:a8a7%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:23:25.940809 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: unable to create socket on eth0 (5) for fe80::4c2:5fff:fee3:a8a7%2#123 Jun 20 18:23:25.940809 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: failed to init interface for address fe80::4c2:5fff:fee3:a8a7%2 Jun 20 18:23:25.940809 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: Listening on routing socket on fd #21 for interface updates Jun 20 18:23:25.940809 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:23:25.940809 ntpd[1970]: 20 Jun 18:23:25 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.895 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.908 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.912 INFO Fetch successful Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.912 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.913 INFO Fetch successful Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.913 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.916 INFO Fetch successful Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.917 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.918 INFO Fetch successful Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.918 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.922 INFO Fetch failed with 404: resource not found Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.922 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.923 INFO Fetch successful Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.923 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.927 INFO Fetch successful Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.927 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.930 INFO Fetch successful Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.930 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.932 INFO Fetch successful Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.932 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 20 18:23:25.941220 coreos-metadata[1964]: Jun 20 18:23:25.934 INFO Fetch successful Jun 20 18:23:25.857188 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
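
The coreos-metadata fetches above follow the IMDSv2 pattern: a PUT to /latest/api/token first, then each meta-data path is requested with the returned token attached (the ipv6 path legitimately returns 404 on an instance with no IPv6 address). A minimal sketch of that flow with the Python standard library, assuming it runs on an EC2 instance where IMDSv2 is reachable:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT for a session token (IMDSv2); the TTL is capped at 6 hours.
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()

    # Step 2: GET metadata paths with the token header, as the log shows
    # coreos-metadata doing for instance-id, instance-type, local-ipv4, ...
    def fetch(path: str) -> str:
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urllib.request.urlopen(req, timeout=2).read().decode()

    print(fetch("instance-id"))
    print(fetch("placement/availability-zone"))

The 2021-01-03 API version in the URLs matches the one coreos-metadata uses in the log; any current IMDS version string would work the same way.
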
Jun 20 18:23:25.851744 ntpd[1970]: Listen normally on 4 lo [::1]:123 Jun 20 18:23:25.951970 jq[2018]: true Jun 20 18:23:25.866434 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 18:23:25.851822 ntpd[1970]: bind(21) AF_INET6 fe80::4c2:5fff:fee3:a8a7%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:23:25.866485 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 18:23:25.851943 ntpd[1970]: unable to create socket on eth0 (5) for fe80::4c2:5fff:fee3:a8a7%2#123 Jun 20 18:23:25.870072 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 18:23:25.851972 ntpd[1970]: failed to init interface for address fe80::4c2:5fff:fee3:a8a7%2 Jun 20 18:23:25.870120 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 18:23:25.852036 ntpd[1970]: Listening on routing socket on fd #21 for interface updates Jun 20 18:23:25.944708 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 20 18:23:25.856777 dbus-daemon[1965]: [system] SELinux support is enabled Jun 20 18:23:25.883708 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:23:25.883761 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:23:25.899076 dbus-daemon[1965]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1900 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 20 18:23:25.959460 systemd[1]: Started update-engine.service - Update Engine. Jun 20 18:23:25.978824 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 18:23:25.997682 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 20 18:23:26.023960 extend-filesystems[2007]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 20 18:23:26.023960 extend-filesystems[2007]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 20 18:23:26.023960 extend-filesystems[2007]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jun 20 18:23:26.076343 extend-filesystems[1968]: Resized filesystem in /dev/nvme0n1p9 Jun 20 18:23:26.030784 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 18:23:26.035035 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 18:23:26.110984 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 20 18:23:26.211959 systemd-logind[1981]: Watching system buttons on /dev/input/event0 (Power Button) Jun 20 18:23:26.219189 systemd-logind[1981]: Watching system buttons on /dev/input/event1 (Sleep Button) Jun 20 18:23:26.227068 systemd-logind[1981]: New seat seat0. Jun 20 18:23:26.233395 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:23:26.236391 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 18:23:26.257715 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 18:23:26.261382 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
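
The extend-filesystems step a few entries back grows the root ext4 filesystem on /dev/nvme0n1p9 online, from 553472 to 1489915 blocks of 4 KiB. In byte terms that is roughly a 2.1 GiB image expanding to fill about 5.7 GiB of the partition, as this small check shows:

    BLOCK = 4096  # ext4 block size reported in the log ("(4k) blocks")

    for label, blocks in [("before", 553472), ("after", 1489915)]:
        size = blocks * BLOCK
        print(f"{label}: {blocks} blocks = {size} bytes ~= {size / 2**30:.2f} GiB")

    # before: 553472 blocks ~= 2.11 GiB, after: 1489915 blocks ~= 5.68 GiB
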
Jun 20 18:23:26.303359 bash[2085]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:23:26.307964 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 18:23:26.316527 systemd[1]: Starting sshkeys.service... Jun 20 18:23:26.409329 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 18:23:26.416397 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 20 18:23:26.531605 locksmithd[2033]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 18:23:26.595344 containerd[2011]: time="2025-06-20T18:23:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 18:23:26.612982 containerd[2011]: time="2025-06-20T18:23:26.607472364Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 18:23:26.671165 coreos-metadata[2115]: Jun 20 18:23:26.668 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 20 18:23:26.674866 coreos-metadata[2115]: Jun 20 18:23:26.672 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 20 18:23:26.675396 coreos-metadata[2115]: Jun 20 18:23:26.675 INFO Fetch successful Jun 20 18:23:26.675396 coreos-metadata[2115]: Jun 20 18:23:26.675 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 20 18:23:26.679973 coreos-metadata[2115]: Jun 20 18:23:26.679 INFO Fetch successful Jun 20 18:23:26.684373 unknown[2115]: wrote ssh authorized keys file for user: core Jun 20 18:23:26.695944 containerd[2011]: time="2025-06-20T18:23:26.694786524Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.848µs" Jun 20 18:23:26.698668 containerd[2011]: time="2025-06-20T18:23:26.698600628Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 18:23:26.698887 containerd[2011]: time="2025-06-20T18:23:26.698829372Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 18:23:26.699347 containerd[2011]: time="2025-06-20T18:23:26.699304464Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 18:23:26.699922 containerd[2011]: time="2025-06-20T18:23:26.699891504Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 18:23:26.700057 containerd[2011]: time="2025-06-20T18:23:26.700029876Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 18:23:26.700263 containerd[2011]: time="2025-06-20T18:23:26.700231440Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 18:23:26.701667 containerd[2011]: time="2025-06-20T18:23:26.701568756Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 18:23:26.702468 containerd[2011]: time="2025-06-20T18:23:26.702414780Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the 
btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 18:23:26.706014 containerd[2011]: time="2025-06-20T18:23:26.704892456Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 18:23:26.706014 containerd[2011]: time="2025-06-20T18:23:26.704944896Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 18:23:26.706014 containerd[2011]: time="2025-06-20T18:23:26.704967252Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 18:23:26.706014 containerd[2011]: time="2025-06-20T18:23:26.705183048Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 18:23:26.706014 containerd[2011]: time="2025-06-20T18:23:26.705632820Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 18:23:26.706014 containerd[2011]: time="2025-06-20T18:23:26.705698604Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 18:23:26.706014 containerd[2011]: time="2025-06-20T18:23:26.705724512Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 18:23:26.709950 containerd[2011]: time="2025-06-20T18:23:26.709557840Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 18:23:26.710345 containerd[2011]: time="2025-06-20T18:23:26.710303232Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 18:23:26.711489 containerd[2011]: time="2025-06-20T18:23:26.711430740Z" level=info msg="metadata content store policy set" policy=shared Jun 20 18:23:26.725201 containerd[2011]: time="2025-06-20T18:23:26.724989660Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 18:23:26.725201 containerd[2011]: time="2025-06-20T18:23:26.725125776Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 18:23:26.725201 containerd[2011]: time="2025-06-20T18:23:26.725158476Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 18:23:26.726885 containerd[2011]: time="2025-06-20T18:23:26.725415144Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 18:23:26.727702 containerd[2011]: time="2025-06-20T18:23:26.727027908Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 18:23:26.727702 containerd[2011]: time="2025-06-20T18:23:26.727122816Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 18:23:26.727702 containerd[2011]: time="2025-06-20T18:23:26.727179720Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 18:23:26.727702 containerd[2011]: time="2025-06-20T18:23:26.727214196Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 18:23:26.727702 containerd[2011]: 
time="2025-06-20T18:23:26.727270536Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 18:23:26.727702 containerd[2011]: time="2025-06-20T18:23:26.727308156Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 18:23:26.727702 containerd[2011]: time="2025-06-20T18:23:26.727356840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 18:23:26.727702 containerd[2011]: time="2025-06-20T18:23:26.727393776Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 18:23:26.728298 containerd[2011]: time="2025-06-20T18:23:26.728164404Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 18:23:26.728298 containerd[2011]: time="2025-06-20T18:23:26.728247384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 18:23:26.728461 containerd[2011]: time="2025-06-20T18:23:26.728432448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 18:23:26.728589 containerd[2011]: time="2025-06-20T18:23:26.728561484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 20 18:23:26.728714 containerd[2011]: time="2025-06-20T18:23:26.728687076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 18:23:26.728854 containerd[2011]: time="2025-06-20T18:23:26.728809380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 18:23:26.728985 containerd[2011]: time="2025-06-20T18:23:26.728956896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 18:23:26.729105 containerd[2011]: time="2025-06-20T18:23:26.729076968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 18:23:26.731958 containerd[2011]: time="2025-06-20T18:23:26.730880796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 18:23:26.731958 containerd[2011]: time="2025-06-20T18:23:26.730920552Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 18:23:26.731958 containerd[2011]: time="2025-06-20T18:23:26.730975380Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 18:23:26.731958 containerd[2011]: time="2025-06-20T18:23:26.731176956Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 18:23:26.731958 containerd[2011]: time="2025-06-20T18:23:26.731245248Z" level=info msg="Start snapshots syncer" Jun 20 18:23:26.731958 containerd[2011]: time="2025-06-20T18:23:26.731333292Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 18:23:26.732457 containerd[2011]: time="2025-06-20T18:23:26.731921292Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 18:23:26.732457 containerd[2011]: time="2025-06-20T18:23:26.732404352Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 18:23:26.736188 containerd[2011]: time="2025-06-20T18:23:26.734920224Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 18:23:26.736188 containerd[2011]: time="2025-06-20T18:23:26.735523008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 18:23:26.736188 containerd[2011]: time="2025-06-20T18:23:26.735605052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 18:23:26.736188 containerd[2011]: time="2025-06-20T18:23:26.735634668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 18:23:26.736188 containerd[2011]: time="2025-06-20T18:23:26.735693696Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 18:23:26.736188 containerd[2011]: time="2025-06-20T18:23:26.735745680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 18:23:26.736188 containerd[2011]: time="2025-06-20T18:23:26.735777276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 18:23:26.736188 containerd[2011]: time="2025-06-20T18:23:26.735805656Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 18:23:26.736188 containerd[2011]: time="2025-06-20T18:23:26.735925800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 18:23:26.736188 containerd[2011]: 
time="2025-06-20T18:23:26.735956304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 18:23:26.736188 containerd[2011]: time="2025-06-20T18:23:26.736005864Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 18:23:26.736188 containerd[2011]: time="2025-06-20T18:23:26.736139880Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 18:23:26.736918 containerd[2011]: time="2025-06-20T18:23:26.736755420Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 18:23:26.736918 containerd[2011]: time="2025-06-20T18:23:26.736792188Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 18:23:26.737133 containerd[2011]: time="2025-06-20T18:23:26.736877196Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 18:23:26.737133 containerd[2011]: time="2025-06-20T18:23:26.737064288Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 18:23:26.738692 containerd[2011]: time="2025-06-20T18:23:26.737107512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 18:23:26.738692 containerd[2011]: time="2025-06-20T18:23:26.737461848Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 18:23:26.738692 containerd[2011]: time="2025-06-20T18:23:26.737520312Z" level=info msg="runtime interface created" Jun 20 18:23:26.738692 containerd[2011]: time="2025-06-20T18:23:26.737535156Z" level=info msg="created NRI interface" Jun 20 18:23:26.738692 containerd[2011]: time="2025-06-20T18:23:26.737556060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 18:23:26.738692 containerd[2011]: time="2025-06-20T18:23:26.737584800Z" level=info msg="Connect containerd service" Jun 20 18:23:26.738692 containerd[2011]: time="2025-06-20T18:23:26.737650572Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:23:26.745302 containerd[2011]: time="2025-06-20T18:23:26.744326736Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:23:26.787020 ntpd[1970]: bind(24) AF_INET6 fe80::4c2:5fff:fee3:a8a7%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:23:26.791784 ntpd[1970]: 20 Jun 18:23:26 ntpd[1970]: bind(24) AF_INET6 fe80::4c2:5fff:fee3:a8a7%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:23:26.791784 ntpd[1970]: 20 Jun 18:23:26 ntpd[1970]: unable to create socket on eth0 (6) for fe80::4c2:5fff:fee3:a8a7%2#123 Jun 20 18:23:26.791784 ntpd[1970]: 20 Jun 18:23:26 ntpd[1970]: failed to init interface for address fe80::4c2:5fff:fee3:a8a7%2 Jun 20 18:23:26.787078 ntpd[1970]: unable to create socket on eth0 (6) for fe80::4c2:5fff:fee3:a8a7%2#123 Jun 20 18:23:26.787103 ntpd[1970]: failed to init interface for address fe80::4c2:5fff:fee3:a8a7%2 Jun 20 18:23:26.812608 update-ssh-keys[2149]: Updated 
"/home/core/.ssh/authorized_keys" Jun 20 18:23:26.819397 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 18:23:26.829144 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 20 18:23:26.834895 systemd[1]: Finished sshkeys.service. Jun 20 18:23:26.859144 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 20 18:23:26.869482 dbus-daemon[1965]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2026 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 20 18:23:26.880654 systemd[1]: Starting polkit.service - Authorization Manager... Jun 20 18:23:27.236542 containerd[2011]: time="2025-06-20T18:23:27.236453135Z" level=info msg="Start subscribing containerd event" Jun 20 18:23:27.236779 containerd[2011]: time="2025-06-20T18:23:27.236553395Z" level=info msg="Start recovering state" Jun 20 18:23:27.236779 containerd[2011]: time="2025-06-20T18:23:27.236686427Z" level=info msg="Start event monitor" Jun 20 18:23:27.236779 containerd[2011]: time="2025-06-20T18:23:27.236713103Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:23:27.239897 containerd[2011]: time="2025-06-20T18:23:27.239398835Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:23:27.245284 containerd[2011]: time="2025-06-20T18:23:27.239809727Z" level=info msg="Start streaming server" Jun 20 18:23:27.245284 containerd[2011]: time="2025-06-20T18:23:27.245282747Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 18:23:27.245458 containerd[2011]: time="2025-06-20T18:23:27.245306627Z" level=info msg="runtime interface starting up..." Jun 20 18:23:27.245458 containerd[2011]: time="2025-06-20T18:23:27.245333147Z" level=info msg="starting plugins..." Jun 20 18:23:27.245458 containerd[2011]: time="2025-06-20T18:23:27.245373491Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 18:23:27.246886 containerd[2011]: time="2025-06-20T18:23:27.245700779Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:23:27.245963 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 18:23:27.253498 containerd[2011]: time="2025-06-20T18:23:27.253437503Z" level=info msg="containerd successfully booted in 0.662176s" Jun 20 18:23:27.260896 polkitd[2159]: Started polkitd version 126 Jun 20 18:23:27.273060 systemd-networkd[1900]: eth0: Gained IPv6LL Jun 20 18:23:27.277093 polkitd[2159]: Loading rules from directory /etc/polkit-1/rules.d Jun 20 18:23:27.277742 polkitd[2159]: Loading rules from directory /run/polkit-1/rules.d Jun 20 18:23:27.277865 polkitd[2159]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jun 20 18:23:27.278375 polkitd[2159]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jun 20 18:23:27.278443 polkitd[2159]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jun 20 18:23:27.278523 polkitd[2159]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 20 18:23:27.282951 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 18:23:27.287447 systemd[1]: Reached target network-online.target - Network is Online. 
Jun 20 18:23:27.293215 polkitd[2159]: Finished loading, compiling and executing 2 rules Jun 20 18:23:27.293795 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jun 20 18:23:27.299747 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 20 18:23:27.305320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:27.312288 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 18:23:27.315065 polkitd[2159]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 20 18:23:27.316352 systemd[1]: Started polkit.service - Authorization Manager. Jun 20 18:23:27.387327 systemd-hostnamed[2026]: Hostname set to (transient) Jun 20 18:23:27.387600 systemd-resolved[1901]: System hostname changed to 'ip-172-31-31-140'. Jun 20 18:23:27.421914 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 18:23:27.523648 amazon-ssm-agent[2185]: Initializing new seelog logger Jun 20 18:23:27.525140 amazon-ssm-agent[2185]: New Seelog Logger Creation Complete Jun 20 18:23:27.525140 amazon-ssm-agent[2185]: 2025/06/20 18:23:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:27.525140 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:27.525140 amazon-ssm-agent[2185]: 2025/06/20 18:23:27 processing appconfig overrides Jun 20 18:23:27.528523 amazon-ssm-agent[2185]: 2025/06/20 18:23:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:27.528523 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:27.528523 amazon-ssm-agent[2185]: 2025/06/20 18:23:27 processing appconfig overrides Jun 20 18:23:27.528711 amazon-ssm-agent[2185]: 2025/06/20 18:23:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:27.528711 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:27.528787 amazon-ssm-agent[2185]: 2025/06/20 18:23:27 processing appconfig overrides Jun 20 18:23:27.529561 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.5281 INFO Proxy environment variables: Jun 20 18:23:27.543867 amazon-ssm-agent[2185]: 2025/06/20 18:23:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:27.543867 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:27.543867 amazon-ssm-agent[2185]: 2025/06/20 18:23:27 processing appconfig overrides Jun 20 18:23:27.633357 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.5281 INFO https_proxy: Jun 20 18:23:27.732482 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.5281 INFO http_proxy: Jun 20 18:23:27.831047 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.5281 INFO no_proxy: Jun 20 18:23:27.929859 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.5283 INFO Checking if agent identity type OnPrem can be assumed Jun 20 18:23:28.028687 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.5284 INFO Checking if agent identity type EC2 can be assumed Jun 20 18:23:28.098760 tar[1989]: linux-arm64/README.md Jun 20 18:23:28.127963 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.6968 INFO Agent will take identity from EC2 Jun 20 18:23:28.138939 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 18:23:28.199769 amazon-ssm-agent[2185]: 2025/06/20 18:23:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jun 20 18:23:28.199769 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:28.199769 amazon-ssm-agent[2185]: 2025/06/20 18:23:28 processing appconfig overrides Jun 20 18:23:28.227518 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.7090 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jun 20 18:23:28.246186 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.7090 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jun 20 18:23:28.246186 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.7090 INFO [amazon-ssm-agent] Starting Core Agent Jun 20 18:23:28.246370 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.7090 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jun 20 18:23:28.246370 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.7090 INFO [Registrar] Starting registrar module Jun 20 18:23:28.246370 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.7142 INFO [EC2Identity] Checking disk for registration info Jun 20 18:23:28.246370 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.7143 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jun 20 18:23:28.246370 amazon-ssm-agent[2185]: 2025-06-20 18:23:27.7143 INFO [EC2Identity] Generating registration keypair Jun 20 18:23:28.246370 amazon-ssm-agent[2185]: 2025-06-20 18:23:28.1549 INFO [EC2Identity] Checking write access before registering Jun 20 18:23:28.247852 amazon-ssm-agent[2185]: 2025-06-20 18:23:28.1556 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jun 20 18:23:28.247852 amazon-ssm-agent[2185]: 2025-06-20 18:23:28.1990 INFO [EC2Identity] EC2 registration was successful. Jun 20 18:23:28.247852 amazon-ssm-agent[2185]: 2025-06-20 18:23:28.1990 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jun 20 18:23:28.247852 amazon-ssm-agent[2185]: 2025-06-20 18:23:28.1991 INFO [CredentialRefresher] credentialRefresher has started Jun 20 18:23:28.247852 amazon-ssm-agent[2185]: 2025-06-20 18:23:28.1992 INFO [CredentialRefresher] Starting credentials refresher loop Jun 20 18:23:28.247852 amazon-ssm-agent[2185]: 2025-06-20 18:23:28.2456 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 20 18:23:28.247852 amazon-ssm-agent[2185]: 2025-06-20 18:23:28.2460 INFO [CredentialRefresher] Credentials ready Jun 20 18:23:28.326054 amazon-ssm-agent[2185]: 2025-06-20 18:23:28.2470 INFO [CredentialRefresher] Next credential rotation will be in 29.9999765259 minutes Jun 20 18:23:28.992548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 18:23:29.013353 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:23:29.281234 amazon-ssm-agent[2185]: 2025-06-20 18:23:29.2809 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 20 18:23:29.383707 amazon-ssm-agent[2185]: 2025-06-20 18:23:29.2878 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2221) started Jun 20 18:23:29.484862 amazon-ssm-agent[2185]: 2025-06-20 18:23:29.2879 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 20 18:23:29.786055 ntpd[1970]: Listen normally on 7 eth0 [fe80::4c2:5fff:fee3:a8a7%2]:123 Jun 20 18:23:29.786616 ntpd[1970]: 20 Jun 18:23:29 ntpd[1970]: Listen normally on 7 eth0 [fe80::4c2:5fff:fee3:a8a7%2]:123 Jun 20 18:23:29.992762 kubelet[2214]: E0620 18:23:29.992676 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:23:29.997285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:23:29.997589 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:23:29.998571 systemd[1]: kubelet.service: Consumed 1.370s CPU time, 253.5M memory peak. Jun 20 18:23:30.751775 sshd_keygen[2017]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:23:30.790264 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:23:30.797231 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:23:30.804291 systemd[1]: Started sshd@0-172.31.31.140:22-139.178.68.195:38202.service - OpenSSH per-connection server daemon (139.178.68.195:38202). Jun 20 18:23:30.824586 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:23:30.825909 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:23:30.833248 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:23:30.863545 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:23:30.871322 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 18:23:30.880398 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 18:23:30.883502 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 18:23:30.886119 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 18:23:30.892038 systemd[1]: Startup finished in 3.728s (kernel) + 9.807s (initrd) + 10.466s (userspace) = 24.002s. Jun 20 18:23:31.060610 sshd[2243]: Accepted publickey for core from 139.178.68.195 port 38202 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:31.064956 sshd-session[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:31.090923 systemd-logind[1981]: New session 1 of user core. Jun 20 18:23:31.092409 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 18:23:31.095574 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:23:31.137666 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jun 20 18:23:31.142827 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:23:31.162134 (systemd)[2258]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:23:31.166878 systemd-logind[1981]: New session c1 of user core. Jun 20 18:23:31.470538 systemd[2258]: Queued start job for default target default.target. Jun 20 18:23:31.487730 systemd[2258]: Created slice app.slice - User Application Slice. Jun 20 18:23:31.488521 systemd[2258]: Reached target paths.target - Paths. Jun 20 18:23:31.488637 systemd[2258]: Reached target timers.target - Timers. Jun 20 18:23:31.492992 systemd[2258]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:23:31.523308 systemd[2258]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:23:31.523549 systemd[2258]: Reached target sockets.target - Sockets. Jun 20 18:23:31.523659 systemd[2258]: Reached target basic.target - Basic System. Jun 20 18:23:31.523747 systemd[2258]: Reached target default.target - Main User Target. Jun 20 18:23:31.523820 systemd[2258]: Startup finished in 344ms. Jun 20 18:23:31.523936 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 18:23:31.538081 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 18:23:31.692241 systemd[1]: Started sshd@1-172.31.31.140:22-139.178.68.195:38216.service - OpenSSH per-connection server daemon (139.178.68.195:38216). Jun 20 18:23:31.889985 sshd[2269]: Accepted publickey for core from 139.178.68.195 port 38216 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:31.892460 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:31.901934 systemd-logind[1981]: New session 2 of user core. Jun 20 18:23:31.910088 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 18:23:32.033485 sshd[2271]: Connection closed by 139.178.68.195 port 38216 Jun 20 18:23:32.034287 sshd-session[2269]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:32.040971 systemd[1]: sshd@1-172.31.31.140:22-139.178.68.195:38216.service: Deactivated successfully. Jun 20 18:23:32.045234 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 18:23:32.047941 systemd-logind[1981]: Session 2 logged out. Waiting for processes to exit. Jun 20 18:23:32.050423 systemd-logind[1981]: Removed session 2. Jun 20 18:23:32.068363 systemd[1]: Started sshd@2-172.31.31.140:22-139.178.68.195:38232.service - OpenSSH per-connection server daemon (139.178.68.195:38232). Jun 20 18:23:32.277484 sshd[2277]: Accepted publickey for core from 139.178.68.195 port 38232 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:32.279918 sshd-session[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:32.288059 systemd-logind[1981]: New session 3 of user core. Jun 20 18:23:32.295117 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 18:23:32.413187 sshd[2279]: Connection closed by 139.178.68.195 port 38232 Jun 20 18:23:32.414047 sshd-session[2277]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:32.419901 systemd-logind[1981]: Session 3 logged out. Waiting for processes to exit. Jun 20 18:23:32.420681 systemd[1]: sshd@2-172.31.31.140:22-139.178.68.195:38232.service: Deactivated successfully. Jun 20 18:23:32.424460 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 18:23:32.429060 systemd-logind[1981]: Removed session 3. 
Jun 20 18:23:32.454280 systemd[1]: Started sshd@3-172.31.31.140:22-139.178.68.195:38242.service - OpenSSH per-connection server daemon (139.178.68.195:38242). Jun 20 18:23:32.649622 sshd[2285]: Accepted publickey for core from 139.178.68.195 port 38242 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:32.652617 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:32.661929 systemd-logind[1981]: New session 4 of user core. Jun 20 18:23:32.667076 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 18:23:32.793965 sshd[2287]: Connection closed by 139.178.68.195 port 38242 Jun 20 18:23:32.794747 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:32.801076 systemd-logind[1981]: Session 4 logged out. Waiting for processes to exit. Jun 20 18:23:32.802117 systemd[1]: sshd@3-172.31.31.140:22-139.178.68.195:38242.service: Deactivated successfully. Jun 20 18:23:32.805776 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 18:23:32.809083 systemd-logind[1981]: Removed session 4. Jun 20 18:23:32.832162 systemd[1]: Started sshd@4-172.31.31.140:22-139.178.68.195:38250.service - OpenSSH per-connection server daemon (139.178.68.195:38250). Jun 20 18:23:33.036125 sshd[2293]: Accepted publickey for core from 139.178.68.195 port 38250 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:33.038586 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:33.046919 systemd-logind[1981]: New session 5 of user core. Jun 20 18:23:33.055065 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 18:23:33.173060 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 18:23:33.173685 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:23:33.193827 sudo[2296]: pam_unix(sudo:session): session closed for user root Jun 20 18:23:33.217730 sshd[2295]: Connection closed by 139.178.68.195 port 38250 Jun 20 18:23:33.218776 sshd-session[2293]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:33.226237 systemd[1]: sshd@4-172.31.31.140:22-139.178.68.195:38250.service: Deactivated successfully. Jun 20 18:23:33.229406 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 18:23:33.233367 systemd-logind[1981]: Session 5 logged out. Waiting for processes to exit. Jun 20 18:23:33.236295 systemd-logind[1981]: Removed session 5. Jun 20 18:23:33.254245 systemd[1]: Started sshd@5-172.31.31.140:22-139.178.68.195:38262.service - OpenSSH per-connection server daemon (139.178.68.195:38262). Jun 20 18:23:33.448824 sshd[2302]: Accepted publickey for core from 139.178.68.195 port 38262 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:33.451381 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:33.460721 systemd-logind[1981]: New session 6 of user core. Jun 20 18:23:33.470115 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jun 20 18:23:33.574322 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 18:23:33.574972 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:23:33.582572 sudo[2306]: pam_unix(sudo:session): session closed for user root Jun 20 18:23:33.592100 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 18:23:33.592678 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:23:33.610473 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:23:33.678914 augenrules[2328]: No rules Jun 20 18:23:33.681493 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:23:33.682250 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:23:33.684182 sudo[2305]: pam_unix(sudo:session): session closed for user root Jun 20 18:23:33.707458 sshd[2304]: Connection closed by 139.178.68.195 port 38262 Jun 20 18:23:33.708656 sshd-session[2302]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:33.714525 systemd[1]: sshd@5-172.31.31.140:22-139.178.68.195:38262.service: Deactivated successfully. Jun 20 18:23:33.717208 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 18:23:33.721783 systemd-logind[1981]: Session 6 logged out. Waiting for processes to exit. Jun 20 18:23:33.723639 systemd-logind[1981]: Removed session 6. Jun 20 18:23:33.743905 systemd[1]: Started sshd@6-172.31.31.140:22-139.178.68.195:50150.service - OpenSSH per-connection server daemon (139.178.68.195:50150). Jun 20 18:23:33.938201 sshd[2337]: Accepted publickey for core from 139.178.68.195 port 50150 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:33.939771 sshd-session[2337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:33.947944 systemd-logind[1981]: New session 7 of user core. Jun 20 18:23:33.958102 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 18:23:34.061392 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 18:23:34.062561 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:23:34.570295 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 18:23:34.585665 (dockerd)[2358]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 18:23:34.934229 dockerd[2358]: time="2025-06-20T18:23:34.932126878Z" level=info msg="Starting up" Jun 20 18:23:34.935390 dockerd[2358]: time="2025-06-20T18:23:34.935323247Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 18:23:35.079864 systemd[1]: var-lib-docker-metacopy\x2dcheck1480127628-merged.mount: Deactivated successfully. Jun 20 18:23:35.094373 dockerd[2358]: time="2025-06-20T18:23:35.094312824Z" level=info msg="Loading containers: start." Jun 20 18:23:35.107936 kernel: Initializing XFRM netlink socket Jun 20 18:23:35.415772 (udev-worker)[2381]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:23:35.492875 systemd-networkd[1900]: docker0: Link UP Jun 20 18:23:35.500906 dockerd[2358]: time="2025-06-20T18:23:35.500808012Z" level=info msg="Loading containers: done." 
Jun 20 18:23:35.523827 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4236742418-merged.mount: Deactivated successfully. Jun 20 18:23:35.534871 dockerd[2358]: time="2025-06-20T18:23:35.534740246Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 18:23:35.535104 dockerd[2358]: time="2025-06-20T18:23:35.534917023Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 18:23:35.535164 dockerd[2358]: time="2025-06-20T18:23:35.535131198Z" level=info msg="Initializing buildkit" Jun 20 18:23:35.583301 dockerd[2358]: time="2025-06-20T18:23:35.583225193Z" level=info msg="Completed buildkit initialization" Jun 20 18:23:35.598077 dockerd[2358]: time="2025-06-20T18:23:35.597873005Z" level=info msg="Daemon has completed initialization" Jun 20 18:23:35.598422 dockerd[2358]: time="2025-06-20T18:23:35.598347795Z" level=info msg="API listen on /run/docker.sock" Jun 20 18:23:35.598534 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 18:23:36.679387 containerd[2011]: time="2025-06-20T18:23:36.679333345Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 20 18:23:37.348499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3040725555.mount: Deactivated successfully. Jun 20 18:23:38.703923 containerd[2011]: time="2025-06-20T18:23:38.703867344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:38.706189 containerd[2011]: time="2025-06-20T18:23:38.706146575Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jun 20 18:23:38.708193 containerd[2011]: time="2025-06-20T18:23:38.708154819Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:38.713570 containerd[2011]: time="2025-06-20T18:23:38.713506493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:38.715515 containerd[2011]: time="2025-06-20T18:23:38.715470711Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.036075644s" Jun 20 18:23:38.715699 containerd[2011]: time="2025-06-20T18:23:38.715669230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jun 20 18:23:38.716612 containerd[2011]: time="2025-06-20T18:23:38.716392484Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 20 18:23:40.112114 containerd[2011]: time="2025-06-20T18:23:40.112057001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:40.114328 
containerd[2011]: time="2025-06-20T18:23:40.114283862Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228" Jun 20 18:23:40.116481 containerd[2011]: time="2025-06-20T18:23:40.116431591Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:40.122920 containerd[2011]: time="2025-06-20T18:23:40.122864742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:40.124674 containerd[2011]: time="2025-06-20T18:23:40.124627319Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.407684072s" Jun 20 18:23:40.124830 containerd[2011]: time="2025-06-20T18:23:40.124802390Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jun 20 18:23:40.126349 containerd[2011]: time="2025-06-20T18:23:40.126298158Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 20 18:23:40.230945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 18:23:40.233683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:40.579054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:40.596010 (kubelet)[2628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:23:40.688409 kubelet[2628]: E0620 18:23:40.688326 2628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:23:40.695461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:23:40.695774 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:23:40.696594 systemd[1]: kubelet.service: Consumed 321ms CPU time, 108.2M memory peak. 
Jun 20 18:23:41.389540 containerd[2011]: time="2025-06-20T18:23:41.389485961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:41.392218 containerd[2011]: time="2025-06-20T18:23:41.392178090Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jun 20 18:23:41.393530 containerd[2011]: time="2025-06-20T18:23:41.393490010Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:41.397884 containerd[2011]: time="2025-06-20T18:23:41.397783764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:41.399976 containerd[2011]: time="2025-06-20T18:23:41.399931889Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.273577664s" Jun 20 18:23:41.400130 containerd[2011]: time="2025-06-20T18:23:41.400102867Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jun 20 18:23:41.401509 containerd[2011]: time="2025-06-20T18:23:41.401470458Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 20 18:23:42.832691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3932623485.mount: Deactivated successfully. 
Jun 20 18:23:43.405356 containerd[2011]: time="2025-06-20T18:23:43.405299414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:43.408051 containerd[2011]: time="2025-06-20T18:23:43.407994221Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jun 20 18:23:43.410698 containerd[2011]: time="2025-06-20T18:23:43.410642972Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:43.414941 containerd[2011]: time="2025-06-20T18:23:43.414881547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:43.416131 containerd[2011]: time="2025-06-20T18:23:43.416087417Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 2.014359946s" Jun 20 18:23:43.416304 containerd[2011]: time="2025-06-20T18:23:43.416272958Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jun 20 18:23:43.417343 containerd[2011]: time="2025-06-20T18:23:43.417235119Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 18:23:43.981941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2416441860.mount: Deactivated successfully. 
Jun 20 18:23:45.217339 containerd[2011]: time="2025-06-20T18:23:45.217279896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:45.220083 containerd[2011]: time="2025-06-20T18:23:45.220000636Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jun 20 18:23:45.222799 containerd[2011]: time="2025-06-20T18:23:45.222741317Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:45.228338 containerd[2011]: time="2025-06-20T18:23:45.228274955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:45.230514 containerd[2011]: time="2025-06-20T18:23:45.230456673Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.812978793s" Jun 20 18:23:45.230622 containerd[2011]: time="2025-06-20T18:23:45.230513330Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jun 20 18:23:45.231584 containerd[2011]: time="2025-06-20T18:23:45.231300335Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 18:23:45.721896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231009996.mount: Deactivated successfully. 
Jun 20 18:23:45.735878 containerd[2011]: time="2025-06-20T18:23:45.735518337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:23:45.738604 containerd[2011]: time="2025-06-20T18:23:45.738563563Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jun 20 18:23:45.740756 containerd[2011]: time="2025-06-20T18:23:45.740716671Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:23:45.746109 containerd[2011]: time="2025-06-20T18:23:45.746000392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:23:45.747236 containerd[2011]: time="2025-06-20T18:23:45.747033376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 515.681932ms" Jun 20 18:23:45.747236 containerd[2011]: time="2025-06-20T18:23:45.747088075Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jun 20 18:23:45.748344 containerd[2011]: time="2025-06-20T18:23:45.748294883Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 20 18:23:46.470040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3754768669.mount: Deactivated successfully. 
Jun 20 18:23:48.429942 containerd[2011]: time="2025-06-20T18:23:48.429876482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:48.431757 containerd[2011]: time="2025-06-20T18:23:48.431703555Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jun 20 18:23:48.432681 containerd[2011]: time="2025-06-20T18:23:48.432592227Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:48.437894 containerd[2011]: time="2025-06-20T18:23:48.437530607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:48.439860 containerd[2011]: time="2025-06-20T18:23:48.439739615Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.691390177s" Jun 20 18:23:48.439860 containerd[2011]: time="2025-06-20T18:23:48.439788948Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jun 20 18:23:50.731602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 18:23:50.736202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:51.073151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:51.088295 (kubelet)[2785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:23:51.168393 kubelet[2785]: E0620 18:23:51.168334 2785 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:23:51.173241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:23:51.173712 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:23:51.176018 systemd[1]: kubelet.service: Consumed 288ms CPU time, 107M memory peak. Jun 20 18:23:54.088099 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:54.088467 systemd[1]: kubelet.service: Consumed 288ms CPU time, 107M memory peak. Jun 20 18:23:54.092430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:54.148285 systemd[1]: Reload requested from client PID 2799 ('systemctl') (unit session-7.scope)... Jun 20 18:23:54.148507 systemd[1]: Reloading... Jun 20 18:23:54.393895 zram_generator::config[2847]: No configuration found. Jun 20 18:23:54.591248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:23:54.847684 systemd[1]: Reloading finished in 698 ms. 
Jun 20 18:23:54.944454 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:54.949981 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:23:54.950407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:54.950482 systemd[1]: kubelet.service: Consumed 230ms CPU time, 95M memory peak. Jun 20 18:23:54.954615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:55.286394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:55.310395 (kubelet)[2909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:23:55.383222 kubelet[2909]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:23:55.383222 kubelet[2909]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:23:55.383222 kubelet[2909]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:23:55.383740 kubelet[2909]: I0620 18:23:55.383312 2909 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:23:57.423219 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 20 18:23:58.791151 kubelet[2909]: I0620 18:23:58.791102 2909 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 18:23:58.791757 kubelet[2909]: I0620 18:23:58.791735 2909 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:23:58.792365 kubelet[2909]: I0620 18:23:58.792340 2909 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 18:23:58.853652 kubelet[2909]: E0620 18:23:58.853602 2909 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:58.856322 kubelet[2909]: I0620 18:23:58.856261 2909 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:23:58.869866 kubelet[2909]: I0620 18:23:58.869770 2909 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 18:23:58.876321 kubelet[2909]: I0620 18:23:58.876269 2909 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:23:58.876758 kubelet[2909]: I0620 18:23:58.876707 2909 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:23:58.877085 kubelet[2909]: I0620 18:23:58.876758 2909 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-140","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:23:58.877269 kubelet[2909]: I0620 18:23:58.877122 2909 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:23:58.877269 kubelet[2909]: I0620 18:23:58.877143 2909 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 18:23:58.877375 kubelet[2909]: I0620 18:23:58.877357 2909 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:23:58.883117 kubelet[2909]: I0620 18:23:58.882958 2909 kubelet.go:446] "Attempting to sync node with API server" Jun 20 18:23:58.883117 kubelet[2909]: I0620 18:23:58.883003 2909 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:23:58.885350 kubelet[2909]: I0620 18:23:58.884967 2909 kubelet.go:352] "Adding apiserver pod source" Jun 20 18:23:58.885350 kubelet[2909]: I0620 18:23:58.885006 2909 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:23:58.890601 kubelet[2909]: W0620 18:23:58.890531 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-140&limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jun 20 18:23:58.890830 kubelet[2909]: E0620 18:23:58.890798 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-140&limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:58.891165 kubelet[2909]: W0620 
18:23:58.891114 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jun 20 18:23:58.891450 kubelet[2909]: E0620 18:23:58.891306 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:58.891718 kubelet[2909]: I0620 18:23:58.891662 2909 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 18:23:58.892758 kubelet[2909]: I0620 18:23:58.892732 2909 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 18:23:58.893011 kubelet[2909]: W0620 18:23:58.892991 2909 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 18:23:58.895822 kubelet[2909]: I0620 18:23:58.895779 2909 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:23:58.896311 kubelet[2909]: I0620 18:23:58.896067 2909 server.go:1287] "Started kubelet" Jun 20 18:23:58.898335 kubelet[2909]: I0620 18:23:58.898267 2909 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:23:58.899809 kubelet[2909]: I0620 18:23:58.899762 2909 server.go:479] "Adding debug handlers to kubelet server" Jun 20 18:23:58.905059 kubelet[2909]: I0620 18:23:58.904969 2909 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:23:58.905646 kubelet[2909]: I0620 18:23:58.905614 2909 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:23:58.907091 kubelet[2909]: E0620 18:23:58.906706 2909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.140:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.140:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-140.184ad365b0ab569f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-140,UID:ip-172-31-31-140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-140,},FirstTimestamp:2025-06-20 18:23:58.896035487 +0000 UTC m=+3.579559067,LastTimestamp:2025-06-20 18:23:58.896035487 +0000 UTC m=+3.579559067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-140,}" Jun 20 18:23:58.910522 kubelet[2909]: I0620 18:23:58.910460 2909 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:23:58.912878 kubelet[2909]: I0620 18:23:58.911728 2909 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:23:58.921229 kubelet[2909]: E0620 18:23:58.921187 2909 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-140\" not found" Jun 20 18:23:58.921481 kubelet[2909]: I0620 18:23:58.921462 
2909 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:23:58.922502 kubelet[2909]: I0620 18:23:58.922469 2909 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:23:58.922738 kubelet[2909]: I0620 18:23:58.922718 2909 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:23:58.923730 kubelet[2909]: W0620 18:23:58.923633 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jun 20 18:23:58.924034 kubelet[2909]: E0620 18:23:58.924000 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:58.928341 kubelet[2909]: E0620 18:23:58.928302 2909 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:23:58.928792 kubelet[2909]: I0620 18:23:58.928765 2909 factory.go:221] Registration of the containerd container factory successfully Jun 20 18:23:58.928925 kubelet[2909]: I0620 18:23:58.928907 2909 factory.go:221] Registration of the systemd container factory successfully Jun 20 18:23:58.929155 kubelet[2909]: I0620 18:23:58.929128 2909 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:23:58.940611 kubelet[2909]: E0620 18:23:58.940554 2909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-140?timeout=10s\": dial tcp 172.31.31.140:6443: connect: connection refused" interval="200ms" Jun 20 18:23:58.948752 kubelet[2909]: I0620 18:23:58.948663 2909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 18:23:58.951512 kubelet[2909]: I0620 18:23:58.951442 2909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 18:23:58.951512 kubelet[2909]: I0620 18:23:58.951492 2909 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 18:23:58.951690 kubelet[2909]: I0620 18:23:58.951530 2909 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 18:23:58.951690 kubelet[2909]: I0620 18:23:58.951546 2909 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 18:23:58.951690 kubelet[2909]: E0620 18:23:58.951613 2909 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:23:58.961670 kubelet[2909]: W0620 18:23:58.961579 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jun 20 18:23:58.961951 kubelet[2909]: E0620 18:23:58.961677 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:58.977410 kubelet[2909]: I0620 18:23:58.977377 2909 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:23:58.977666 kubelet[2909]: I0620 18:23:58.977585 2909 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:23:58.977666 kubelet[2909]: I0620 18:23:58.977619 2909 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:23:58.984046 kubelet[2909]: I0620 18:23:58.984018 2909 policy_none.go:49] "None policy: Start" Jun 20 18:23:58.984569 kubelet[2909]: I0620 18:23:58.984195 2909 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:23:58.984569 kubelet[2909]: I0620 18:23:58.984226 2909 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:23:58.997125 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 18:23:59.017156 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 18:23:59.021849 kubelet[2909]: E0620 18:23:59.021780 2909 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-140\" not found" Jun 20 18:23:59.031131 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 18:23:59.033861 kubelet[2909]: I0620 18:23:59.033784 2909 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 18:23:59.034155 kubelet[2909]: I0620 18:23:59.034119 2909 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:23:59.034879 kubelet[2909]: I0620 18:23:59.034151 2909 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:23:59.034879 kubelet[2909]: I0620 18:23:59.034544 2909 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:23:59.038325 kubelet[2909]: E0620 18:23:59.038276 2909 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 18:23:59.038538 kubelet[2909]: E0620 18:23:59.038386 2909 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-140\" not found" Jun 20 18:23:59.072628 systemd[1]: Created slice kubepods-burstable-pod5d28297b802d516c6366acbff0f1b866.slice - libcontainer container kubepods-burstable-pod5d28297b802d516c6366acbff0f1b866.slice. 
Jun 20 18:23:59.100002 kubelet[2909]: E0620 18:23:59.099929 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:23:59.104076 systemd[1]: Created slice kubepods-burstable-pod85d1ce73ce72534ffb720a6626ee33d7.slice - libcontainer container kubepods-burstable-pod85d1ce73ce72534ffb720a6626ee33d7.slice. Jun 20 18:23:59.106813 kubelet[2909]: W0620 18:23:59.106740 2909 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85d1ce73ce72534ffb720a6626ee33d7.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85d1ce73ce72534ffb720a6626ee33d7.slice/cpuset.cpus.effective: no such device Jun 20 18:23:59.116961 kubelet[2909]: E0620 18:23:59.116574 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:23:59.123329 systemd[1]: Created slice kubepods-burstable-pod5284a26a21c4c2cb9767020614ee5328.slice - libcontainer container kubepods-burstable-pod5284a26a21c4c2cb9767020614ee5328.slice. Jun 20 18:23:59.123529 kubelet[2909]: I0620 18:23:59.123455 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d28297b802d516c6366acbff0f1b866-ca-certs\") pod \"kube-apiserver-ip-172-31-31-140\" (UID: \"5d28297b802d516c6366acbff0f1b866\") " pod="kube-system/kube-apiserver-ip-172-31-31-140" Jun 20 18:23:59.123529 kubelet[2909]: I0620 18:23:59.123506 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d28297b802d516c6366acbff0f1b866-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-140\" (UID: \"5d28297b802d516c6366acbff0f1b866\") " pod="kube-system/kube-apiserver-ip-172-31-31-140" Jun 20 18:23:59.123661 kubelet[2909]: I0620 18:23:59.123544 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85d1ce73ce72534ffb720a6626ee33d7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"85d1ce73ce72534ffb720a6626ee33d7\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:23:59.123661 kubelet[2909]: I0620 18:23:59.123582 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85d1ce73ce72534ffb720a6626ee33d7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"85d1ce73ce72534ffb720a6626ee33d7\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:23:59.123661 kubelet[2909]: I0620 18:23:59.123634 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d28297b802d516c6366acbff0f1b866-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-140\" (UID: \"5d28297b802d516c6366acbff0f1b866\") " pod="kube-system/kube-apiserver-ip-172-31-31-140" Jun 20 18:23:59.123804 kubelet[2909]: I0620 18:23:59.123667 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/85d1ce73ce72534ffb720a6626ee33d7-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"85d1ce73ce72534ffb720a6626ee33d7\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:23:59.123804 kubelet[2909]: I0620 18:23:59.123707 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/85d1ce73ce72534ffb720a6626ee33d7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"85d1ce73ce72534ffb720a6626ee33d7\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:23:59.123804 kubelet[2909]: I0620 18:23:59.123742 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85d1ce73ce72534ffb720a6626ee33d7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"85d1ce73ce72534ffb720a6626ee33d7\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:23:59.123804 kubelet[2909]: I0620 18:23:59.123777 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5284a26a21c4c2cb9767020614ee5328-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-140\" (UID: \"5284a26a21c4c2cb9767020614ee5328\") " pod="kube-system/kube-scheduler-ip-172-31-31-140" Jun 20 18:23:59.127858 kubelet[2909]: E0620 18:23:59.127794 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:23:59.137458 kubelet[2909]: I0620 18:23:59.137416 2909 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-140" Jun 20 18:23:59.138157 kubelet[2909]: E0620 18:23:59.138091 2909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.140:6443/api/v1/nodes\": dial tcp 172.31.31.140:6443: connect: connection refused" node="ip-172-31-31-140" Jun 20 18:23:59.141577 kubelet[2909]: E0620 18:23:59.141516 2909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-140?timeout=10s\": dial tcp 172.31.31.140:6443: connect: connection refused" interval="400ms" Jun 20 18:23:59.341250 kubelet[2909]: I0620 18:23:59.341098 2909 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-140" Jun 20 18:23:59.341780 kubelet[2909]: E0620 18:23:59.341706 2909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.140:6443/api/v1/nodes\": dial tcp 172.31.31.140:6443: connect: connection refused" node="ip-172-31-31-140" Jun 20 18:23:59.402053 containerd[2011]: time="2025-06-20T18:23:59.401936287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-140,Uid:5d28297b802d516c6366acbff0f1b866,Namespace:kube-system,Attempt:0,}" Jun 20 18:23:59.420881 containerd[2011]: time="2025-06-20T18:23:59.420587247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-140,Uid:85d1ce73ce72534ffb720a6626ee33d7,Namespace:kube-system,Attempt:0,}" Jun 20 18:23:59.430311 containerd[2011]: time="2025-06-20T18:23:59.429965194Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-140,Uid:5284a26a21c4c2cb9767020614ee5328,Namespace:kube-system,Attempt:0,}" Jun 20 18:23:59.514757 containerd[2011]: time="2025-06-20T18:23:59.514656179Z" level=info msg="connecting to shim a57b051c995130f45e6c50fd90bb3087f96a5efbf7659fd3a0b5277b32904700" address="unix:///run/containerd/s/4be75a68cddc0a4ceb335f31ab445742bf289b705904491b138c3ea3fcf918aa" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:23:59.541887 containerd[2011]: time="2025-06-20T18:23:59.541367560Z" level=info msg="connecting to shim 2ba3dd11793e1d1f16b82be37612be847474c93c2633613850c7da88a8844b35" address="unix:///run/containerd/s/6bde50b29da8f388c324c9cb9233967e220f3877cd7d7302284a172c9555153c" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:23:59.543080 kubelet[2909]: E0620 18:23:59.542983 2909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-140?timeout=10s\": dial tcp 172.31.31.140:6443: connect: connection refused" interval="800ms" Jun 20 18:23:59.543377 containerd[2011]: time="2025-06-20T18:23:59.543284210Z" level=info msg="connecting to shim 4a7004dcb05ca44ebe68886c1886e1f823fb646efb09f4eedd15bb4f5db01cd3" address="unix:///run/containerd/s/60a5779666c401207603b1da3832d1d5fb042decc3907f4452ed9bc538a4f631" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:23:59.611330 systemd[1]: Started cri-containerd-a57b051c995130f45e6c50fd90bb3087f96a5efbf7659fd3a0b5277b32904700.scope - libcontainer container a57b051c995130f45e6c50fd90bb3087f96a5efbf7659fd3a0b5277b32904700. Jun 20 18:23:59.649195 systemd[1]: Started cri-containerd-2ba3dd11793e1d1f16b82be37612be847474c93c2633613850c7da88a8844b35.scope - libcontainer container 2ba3dd11793e1d1f16b82be37612be847474c93c2633613850c7da88a8844b35. Jun 20 18:23:59.660581 systemd[1]: Started cri-containerd-4a7004dcb05ca44ebe68886c1886e1f823fb646efb09f4eedd15bb4f5db01cd3.scope - libcontainer container 4a7004dcb05ca44ebe68886c1886e1f823fb646efb09f4eedd15bb4f5db01cd3. 
Jun 20 18:23:59.722192 kubelet[2909]: W0620 18:23:59.722086 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-140&limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jun 20 18:23:59.722369 kubelet[2909]: E0620 18:23:59.722203 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-140&limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:59.748366 kubelet[2909]: I0620 18:23:59.748314 2909 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-140" Jun 20 18:23:59.748869 kubelet[2909]: E0620 18:23:59.748799 2909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.140:6443/api/v1/nodes\": dial tcp 172.31.31.140:6443: connect: connection refused" node="ip-172-31-31-140" Jun 20 18:23:59.791864 containerd[2011]: time="2025-06-20T18:23:59.791676022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-140,Uid:5d28297b802d516c6366acbff0f1b866,Namespace:kube-system,Attempt:0,} returns sandbox id \"a57b051c995130f45e6c50fd90bb3087f96a5efbf7659fd3a0b5277b32904700\"" Jun 20 18:23:59.800866 containerd[2011]: time="2025-06-20T18:23:59.800489326Z" level=info msg="CreateContainer within sandbox \"a57b051c995130f45e6c50fd90bb3087f96a5efbf7659fd3a0b5277b32904700\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:23:59.824446 containerd[2011]: time="2025-06-20T18:23:59.824393597Z" level=info msg="Container 2ecf5e8a364f79fd9e1595a21869cffdca91df821ffccda77be2469acfb7b508: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:23:59.826826 containerd[2011]: time="2025-06-20T18:23:59.826780089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-140,Uid:5284a26a21c4c2cb9767020614ee5328,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ba3dd11793e1d1f16b82be37612be847474c93c2633613850c7da88a8844b35\"" Jun 20 18:23:59.832164 containerd[2011]: time="2025-06-20T18:23:59.832107211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-140,Uid:85d1ce73ce72534ffb720a6626ee33d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a7004dcb05ca44ebe68886c1886e1f823fb646efb09f4eedd15bb4f5db01cd3\"" Jun 20 18:23:59.835575 containerd[2011]: time="2025-06-20T18:23:59.835525368Z" level=info msg="CreateContainer within sandbox \"2ba3dd11793e1d1f16b82be37612be847474c93c2633613850c7da88a8844b35\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 18:23:59.838600 containerd[2011]: time="2025-06-20T18:23:59.838550700Z" level=info msg="CreateContainer within sandbox \"4a7004dcb05ca44ebe68886c1886e1f823fb646efb09f4eedd15bb4f5db01cd3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:23:59.844511 containerd[2011]: time="2025-06-20T18:23:59.844446354Z" level=info msg="CreateContainer within sandbox \"a57b051c995130f45e6c50fd90bb3087f96a5efbf7659fd3a0b5277b32904700\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2ecf5e8a364f79fd9e1595a21869cffdca91df821ffccda77be2469acfb7b508\"" Jun 20 18:23:59.845508 containerd[2011]: 
time="2025-06-20T18:23:59.845449659Z" level=info msg="StartContainer for \"2ecf5e8a364f79fd9e1595a21869cffdca91df821ffccda77be2469acfb7b508\"" Jun 20 18:23:59.847681 containerd[2011]: time="2025-06-20T18:23:59.847537743Z" level=info msg="connecting to shim 2ecf5e8a364f79fd9e1595a21869cffdca91df821ffccda77be2469acfb7b508" address="unix:///run/containerd/s/4be75a68cddc0a4ceb335f31ab445742bf289b705904491b138c3ea3fcf918aa" protocol=ttrpc version=3 Jun 20 18:23:59.862527 containerd[2011]: time="2025-06-20T18:23:59.860685886Z" level=info msg="Container b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:23:59.866277 containerd[2011]: time="2025-06-20T18:23:59.866186867Z" level=info msg="Container 615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:23:59.884022 containerd[2011]: time="2025-06-20T18:23:59.883914483Z" level=info msg="CreateContainer within sandbox \"2ba3dd11793e1d1f16b82be37612be847474c93c2633613850c7da88a8844b35\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc\"" Jun 20 18:23:59.885881 containerd[2011]: time="2025-06-20T18:23:59.885817458Z" level=info msg="StartContainer for \"b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc\"" Jun 20 18:23:59.886718 containerd[2011]: time="2025-06-20T18:23:59.886637203Z" level=info msg="CreateContainer within sandbox \"4a7004dcb05ca44ebe68886c1886e1f823fb646efb09f4eedd15bb4f5db01cd3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece\"" Jun 20 18:23:59.889193 systemd[1]: Started cri-containerd-2ecf5e8a364f79fd9e1595a21869cffdca91df821ffccda77be2469acfb7b508.scope - libcontainer container 2ecf5e8a364f79fd9e1595a21869cffdca91df821ffccda77be2469acfb7b508. Jun 20 18:23:59.890011 containerd[2011]: time="2025-06-20T18:23:59.889965062Z" level=info msg="connecting to shim b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc" address="unix:///run/containerd/s/6bde50b29da8f388c324c9cb9233967e220f3877cd7d7302284a172c9555153c" protocol=ttrpc version=3 Jun 20 18:23:59.893002 containerd[2011]: time="2025-06-20T18:23:59.892923449Z" level=info msg="StartContainer for \"615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece\"" Jun 20 18:23:59.909732 containerd[2011]: time="2025-06-20T18:23:59.908510575Z" level=info msg="connecting to shim 615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece" address="unix:///run/containerd/s/60a5779666c401207603b1da3832d1d5fb042decc3907f4452ed9bc538a4f631" protocol=ttrpc version=3 Jun 20 18:23:59.951500 systemd[1]: Started cri-containerd-b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc.scope - libcontainer container b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc. Jun 20 18:23:59.980172 systemd[1]: Started cri-containerd-615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece.scope - libcontainer container 615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece. 
Jun 20 18:24:00.061699 containerd[2011]: time="2025-06-20T18:24:00.061528426Z" level=info msg="StartContainer for \"2ecf5e8a364f79fd9e1595a21869cffdca91df821ffccda77be2469acfb7b508\" returns successfully" Jun 20 18:24:00.121252 containerd[2011]: time="2025-06-20T18:24:00.120246459Z" level=info msg="StartContainer for \"b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc\" returns successfully" Jun 20 18:24:00.143420 kubelet[2909]: W0620 18:24:00.142733 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jun 20 18:24:00.145631 kubelet[2909]: E0620 18:24:00.144103 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:24:00.167980 containerd[2011]: time="2025-06-20T18:24:00.167910664Z" level=info msg="StartContainer for \"615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece\" returns successfully" Jun 20 18:24:00.173070 kubelet[2909]: W0620 18:24:00.172935 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jun 20 18:24:00.173322 kubelet[2909]: E0620 18:24:00.173245 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:24:00.178455 kubelet[2909]: W0620 18:24:00.178297 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jun 20 18:24:00.179655 kubelet[2909]: E0620 18:24:00.178610 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:24:00.556593 kubelet[2909]: I0620 18:24:00.555340 2909 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-140" Jun 20 18:24:01.026363 kubelet[2909]: E0620 18:24:01.025942 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:24:01.045916 kubelet[2909]: E0620 18:24:01.045865 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:24:01.068473 kubelet[2909]: E0620 18:24:01.067404 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:24:02.070128 kubelet[2909]: E0620 18:24:02.067856 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:24:02.070128 kubelet[2909]: E0620 18:24:02.067972 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:24:02.070128 kubelet[2909]: E0620 18:24:02.068370 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:24:03.072933 kubelet[2909]: E0620 18:24:03.072633 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:24:03.076436 kubelet[2909]: E0620 18:24:03.074470 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:24:04.077533 kubelet[2909]: E0620 18:24:04.077487 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:24:04.834582 kubelet[2909]: E0620 18:24:04.834517 2909 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jun 20 18:24:04.894756 kubelet[2909]: I0620 18:24:04.894452 2909 apiserver.go:52] "Watching apiserver" Jun 20 18:24:04.923249 kubelet[2909]: I0620 18:24:04.923194 2909 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:24:04.950826 kubelet[2909]: I0620 18:24:04.950768 2909 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-140" Jun 20 18:24:05.025002 kubelet[2909]: I0620 18:24:05.024948 2909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-140" Jun 20 18:24:05.058682 kubelet[2909]: E0620 18:24:05.058624 2909 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-140\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-140" Jun 20 18:24:05.058682 kubelet[2909]: I0620 18:24:05.058673 2909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:24:05.068832 kubelet[2909]: E0620 18:24:05.068521 2909 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-140\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:24:05.068832 kubelet[2909]: I0620 18:24:05.068566 2909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-140" Jun 20 18:24:05.078045 kubelet[2909]: E0620 18:24:05.077987 2909 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-140\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-140" Jun 20 18:24:06.251349 kubelet[2909]: I0620 18:24:06.251030 2909 kubelet.go:3194] "Creating a mirror pod for 
static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:24:07.272868 systemd[1]: Reload requested from client PID 3186 ('systemctl') (unit session-7.scope)... Jun 20 18:24:07.272894 systemd[1]: Reloading... Jun 20 18:24:07.461902 zram_generator::config[3233]: No configuration found. Jun 20 18:24:07.717527 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:24:08.023181 systemd[1]: Reloading finished in 749 ms. Jun 20 18:24:08.078484 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:24:08.094555 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:24:08.095105 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:24:08.095186 systemd[1]: kubelet.service: Consumed 4.335s CPU time, 128.1M memory peak. Jun 20 18:24:08.101234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:24:08.445908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:24:08.467287 (kubelet)[3290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:24:08.567716 kubelet[3290]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:24:08.567716 kubelet[3290]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:24:08.567716 kubelet[3290]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:24:08.568363 kubelet[3290]: I0620 18:24:08.568008 3290 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:24:08.586740 kubelet[3290]: I0620 18:24:08.586653 3290 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 18:24:08.586740 kubelet[3290]: I0620 18:24:08.586706 3290 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:24:08.588569 kubelet[3290]: I0620 18:24:08.588505 3290 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 18:24:08.596341 kubelet[3290]: I0620 18:24:08.596301 3290 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 18:24:08.603286 kubelet[3290]: I0620 18:24:08.603192 3290 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:24:08.616463 kubelet[3290]: I0620 18:24:08.616394 3290 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 18:24:08.624069 kubelet[3290]: I0620 18:24:08.624019 3290 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:24:08.625980 kubelet[3290]: I0620 18:24:08.624533 3290 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:24:08.625980 kubelet[3290]: I0620 18:24:08.624593 3290 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-140","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:24:08.625980 kubelet[3290]: I0620 18:24:08.624964 3290 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:24:08.625980 kubelet[3290]: I0620 18:24:08.624985 3290 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 18:24:08.626357 kubelet[3290]: I0620 18:24:08.625052 3290 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:24:08.626357 kubelet[3290]: I0620 18:24:08.625308 3290 kubelet.go:446] "Attempting to sync node with API server" Jun 20 18:24:08.626357 kubelet[3290]: I0620 18:24:08.625330 3290 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:24:08.626357 kubelet[3290]: I0620 18:24:08.625404 3290 kubelet.go:352] "Adding apiserver pod source" Jun 20 18:24:08.626357 kubelet[3290]: I0620 18:24:08.625431 3290 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:24:08.631076 kubelet[3290]: I0620 18:24:08.631023 3290 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 18:24:08.631671 sudo[3304]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 18:24:08.633553 sudo[3304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 18:24:08.634139 kubelet[3290]: I0620 18:24:08.634097 3290 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 18:24:08.634942 kubelet[3290]: I0620 18:24:08.634897 3290 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:24:08.635024 kubelet[3290]: I0620 18:24:08.634953 3290 
server.go:1287] "Started kubelet" Jun 20 18:24:08.647346 kubelet[3290]: I0620 18:24:08.647301 3290 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:24:08.664236 kubelet[3290]: I0620 18:24:08.662512 3290 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:24:08.666511 kubelet[3290]: I0620 18:24:08.666427 3290 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:24:08.667110 kubelet[3290]: I0620 18:24:08.667068 3290 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:24:08.668471 kubelet[3290]: I0620 18:24:08.668435 3290 server.go:479] "Adding debug handlers to kubelet server" Jun 20 18:24:08.672600 kubelet[3290]: I0620 18:24:08.672543 3290 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:24:08.687272 kubelet[3290]: I0620 18:24:08.687206 3290 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:24:08.687713 kubelet[3290]: E0620 18:24:08.687671 3290 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-140\" not found" Jun 20 18:24:08.693651 kubelet[3290]: I0620 18:24:08.693614 3290 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:24:08.695663 kubelet[3290]: I0620 18:24:08.695280 3290 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:24:08.748996 kubelet[3290]: E0620 18:24:08.748953 3290 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:24:08.752246 kubelet[3290]: I0620 18:24:08.750951 3290 factory.go:221] Registration of the containerd container factory successfully Jun 20 18:24:08.752246 kubelet[3290]: I0620 18:24:08.751017 3290 factory.go:221] Registration of the systemd container factory successfully Jun 20 18:24:08.752246 kubelet[3290]: I0620 18:24:08.751184 3290 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:24:08.759668 kubelet[3290]: I0620 18:24:08.759609 3290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 18:24:08.765232 kubelet[3290]: I0620 18:24:08.765190 3290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 18:24:08.765930 kubelet[3290]: I0620 18:24:08.765907 3290 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 18:24:08.766059 kubelet[3290]: I0620 18:24:08.766038 3290 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 18:24:08.766149 kubelet[3290]: I0620 18:24:08.766131 3290 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 18:24:08.766313 kubelet[3290]: E0620 18:24:08.766284 3290 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:24:08.867777 kubelet[3290]: E0620 18:24:08.867417 3290 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 18:24:08.915414 kubelet[3290]: I0620 18:24:08.915267 3290 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:24:08.915881 kubelet[3290]: I0620 18:24:08.915741 3290 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:24:08.915881 kubelet[3290]: I0620 18:24:08.915782 3290 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:24:08.916429 kubelet[3290]: I0620 18:24:08.916322 3290 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 18:24:08.916564 kubelet[3290]: I0620 18:24:08.916524 3290 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 18:24:08.916796 kubelet[3290]: I0620 18:24:08.916776 3290 policy_none.go:49] "None policy: Start" Jun 20 18:24:08.917060 kubelet[3290]: I0620 18:24:08.916995 3290 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:24:08.917060 kubelet[3290]: I0620 18:24:08.917163 3290 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:24:08.917060 kubelet[3290]: I0620 18:24:08.917375 3290 state_mem.go:75] "Updated machine memory state" Jun 20 18:24:08.933283 kubelet[3290]: I0620 18:24:08.933245 3290 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 18:24:08.934438 kubelet[3290]: I0620 18:24:08.934303 3290 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:24:08.935158 kubelet[3290]: I0620 18:24:08.934779 3290 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:24:08.937518 kubelet[3290]: I0620 18:24:08.937341 3290 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:24:08.946168 kubelet[3290]: E0620 18:24:08.944590 3290 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 18:24:09.069926 kubelet[3290]: I0620 18:24:09.069582 3290 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:24:09.070070 kubelet[3290]: I0620 18:24:09.069597 3290 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-140" Jun 20 18:24:09.071473 kubelet[3290]: I0620 18:24:09.071017 3290 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-140" Jun 20 18:24:09.084451 kubelet[3290]: E0620 18:24:09.084399 3290 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-140\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:24:09.095123 kubelet[3290]: I0620 18:24:09.095073 3290 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-140" Jun 20 18:24:09.111864 kubelet[3290]: I0620 18:24:09.111735 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85d1ce73ce72534ffb720a6626ee33d7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"85d1ce73ce72534ffb720a6626ee33d7\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:24:09.113127 kubelet[3290]: I0620 18:24:09.113073 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d28297b802d516c6366acbff0f1b866-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-140\" (UID: \"5d28297b802d516c6366acbff0f1b866\") " pod="kube-system/kube-apiserver-ip-172-31-31-140" Jun 20 18:24:09.113268 kubelet[3290]: I0620 18:24:09.113140 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85d1ce73ce72534ffb720a6626ee33d7-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"85d1ce73ce72534ffb720a6626ee33d7\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:24:09.113268 kubelet[3290]: I0620 18:24:09.113182 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85d1ce73ce72534ffb720a6626ee33d7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"85d1ce73ce72534ffb720a6626ee33d7\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:24:09.113268 kubelet[3290]: I0620 18:24:09.113217 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5284a26a21c4c2cb9767020614ee5328-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-140\" (UID: \"5284a26a21c4c2cb9767020614ee5328\") " pod="kube-system/kube-scheduler-ip-172-31-31-140" Jun 20 18:24:09.113268 kubelet[3290]: I0620 18:24:09.113251 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d28297b802d516c6366acbff0f1b866-ca-certs\") pod \"kube-apiserver-ip-172-31-31-140\" (UID: \"5d28297b802d516c6366acbff0f1b866\") " pod="kube-system/kube-apiserver-ip-172-31-31-140" Jun 20 18:24:09.113482 kubelet[3290]: I0620 18:24:09.113288 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d28297b802d516c6366acbff0f1b866-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-140\" (UID: \"5d28297b802d516c6366acbff0f1b866\") " pod="kube-system/kube-apiserver-ip-172-31-31-140" Jun 20 18:24:09.113482 kubelet[3290]: I0620 18:24:09.113325 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/85d1ce73ce72534ffb720a6626ee33d7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"85d1ce73ce72534ffb720a6626ee33d7\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:24:09.113482 kubelet[3290]: I0620 18:24:09.113362 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85d1ce73ce72534ffb720a6626ee33d7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"85d1ce73ce72534ffb720a6626ee33d7\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:24:09.121259 kubelet[3290]: I0620 18:24:09.121192 3290 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-140" Jun 20 18:24:09.121552 kubelet[3290]: I0620 18:24:09.121315 3290 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-140" Jun 20 18:24:09.589807 sudo[3304]: pam_unix(sudo:session): session closed for user root Jun 20 18:24:09.643140 kubelet[3290]: I0620 18:24:09.643060 3290 apiserver.go:52] "Watching apiserver" Jun 20 18:24:09.694057 kubelet[3290]: I0620 18:24:09.693991 3290 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:24:09.832441 kubelet[3290]: I0620 18:24:09.830778 3290 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-140" Jun 20 18:24:09.832441 kubelet[3290]: I0620 18:24:09.831336 3290 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:24:09.845540 kubelet[3290]: E0620 18:24:09.845286 3290 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-140\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-140" Jun 20 18:24:09.849176 kubelet[3290]: E0620 18:24:09.849105 3290 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-140\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jun 20 18:24:09.884116 kubelet[3290]: I0620 18:24:09.884020 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-140" podStartSLOduration=3.883999976 podStartE2EDuration="3.883999976s" podCreationTimestamp="2025-06-20 18:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:09.88352383 +0000 UTC m=+1.407325142" watchObservedRunningTime="2025-06-20 18:24:09.883999976 +0000 UTC m=+1.407801264" Jun 20 18:24:09.925118 kubelet[3290]: I0620 18:24:09.925035 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-140" podStartSLOduration=0.925011104 podStartE2EDuration="925.011104ms" podCreationTimestamp="2025-06-20 18:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:09.909396856 +0000 UTC m=+1.433198156" watchObservedRunningTime="2025-06-20 18:24:09.925011104 +0000 UTC m=+1.448812392" Jun 20 18:24:09.944918 kubelet[3290]: I0620 18:24:09.943047 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-140" podStartSLOduration=0.943022673 podStartE2EDuration="943.022673ms" podCreationTimestamp="2025-06-20 18:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:09.925515884 +0000 UTC m=+1.449317196" watchObservedRunningTime="2025-06-20 18:24:09.943022673 +0000 UTC m=+1.466823961" Jun 20 18:24:10.894167 update_engine[1983]: I20250620 18:24:10.891875 1983 update_attempter.cc:509] Updating boot flags... Jun 20 18:24:12.445476 kubelet[3290]: I0620 18:24:12.445412 3290 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 18:24:12.447822 containerd[2011]: time="2025-06-20T18:24:12.447713762Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 18:24:12.449641 kubelet[3290]: I0620 18:24:12.449486 3290 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 18:24:12.857717 systemd[1]: Created slice kubepods-besteffort-pod473f337e_0ac0_4086_880f_43090003e9aa.slice - libcontainer container kubepods-besteffort-pod473f337e_0ac0_4086_880f_43090003e9aa.slice. Jun 20 18:24:12.865877 kubelet[3290]: I0620 18:24:12.865187 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/473f337e-0ac0-4086-880f-43090003e9aa-kube-proxy\") pod \"kube-proxy-jfv2f\" (UID: \"473f337e-0ac0-4086-880f-43090003e9aa\") " pod="kube-system/kube-proxy-jfv2f" Jun 20 18:24:12.865877 kubelet[3290]: I0620 18:24:12.865658 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/473f337e-0ac0-4086-880f-43090003e9aa-xtables-lock\") pod \"kube-proxy-jfv2f\" (UID: \"473f337e-0ac0-4086-880f-43090003e9aa\") " pod="kube-system/kube-proxy-jfv2f" Jun 20 18:24:12.865877 kubelet[3290]: I0620 18:24:12.865705 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cwd6\" (UniqueName: \"kubernetes.io/projected/473f337e-0ac0-4086-880f-43090003e9aa-kube-api-access-9cwd6\") pod \"kube-proxy-jfv2f\" (UID: \"473f337e-0ac0-4086-880f-43090003e9aa\") " pod="kube-system/kube-proxy-jfv2f" Jun 20 18:24:12.865877 kubelet[3290]: I0620 18:24:12.865759 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/473f337e-0ac0-4086-880f-43090003e9aa-lib-modules\") pod \"kube-proxy-jfv2f\" (UID: \"473f337e-0ac0-4086-880f-43090003e9aa\") " pod="kube-system/kube-proxy-jfv2f" Jun 20 18:24:12.921463 systemd[1]: Created slice kubepods-burstable-pod56902c27_56d3_4770_9350_0d79ea6c84ed.slice - libcontainer container kubepods-burstable-pod56902c27_56d3_4770_9350_0d79ea6c84ed.slice. 
Jun 20 18:24:12.954950 kubelet[3290]: W0620 18:24:12.954866 3290 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-31-140" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-140' and this object Jun 20 18:24:12.955419 kubelet[3290]: E0620 18:24:12.955239 3290 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-31-140\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-140' and this object" logger="UnhandledError" Jun 20 18:24:12.955419 kubelet[3290]: W0620 18:24:12.955392 3290 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-31-140" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-140' and this object Jun 20 18:24:12.955584 kubelet[3290]: E0620 18:24:12.955433 3290 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-31-140\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-140' and this object" logger="UnhandledError" Jun 20 18:24:12.956350 kubelet[3290]: W0620 18:24:12.956311 3290 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-31-140" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-140' and this object Jun 20 18:24:12.956946 kubelet[3290]: E0620 18:24:12.956567 3290 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-31-140\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-140' and this object" logger="UnhandledError" Jun 20 18:24:12.968125 kubelet[3290]: I0620 18:24:12.968073 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-etc-cni-netd\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:12.969246 kubelet[3290]: I0620 18:24:12.968347 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56902c27-56d3-4770-9350-0d79ea6c84ed-clustermesh-secrets\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:12.969246 kubelet[3290]: I0620 18:24:12.968424 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-lib-modules\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:12.969246 kubelet[3290]: I0620 18:24:12.968463 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-host-proc-sys-kernel\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:12.969246 kubelet[3290]: I0620 18:24:12.968533 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-run\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:12.969246 kubelet[3290]: I0620 18:24:12.968573 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-config-path\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:12.969617 kubelet[3290]: I0620 18:24:12.968608 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-host-proc-sys-net\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:12.969617 kubelet[3290]: I0620 18:24:12.968673 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-bpf-maps\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:12.969617 kubelet[3290]: I0620 18:24:12.968710 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-cni-path\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:12.969617 kubelet[3290]: I0620 18:24:12.968746 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snlct\" (UniqueName: \"kubernetes.io/projected/56902c27-56d3-4770-9350-0d79ea6c84ed-kube-api-access-snlct\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:12.969617 kubelet[3290]: I0620 18:24:12.968786 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-hostproc\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:12.971012 kubelet[3290]: I0620 18:24:12.968828 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-cgroup\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 
20 18:24:12.971875 kubelet[3290]: I0620 18:24:12.971211 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-xtables-lock\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:12.972102 kubelet[3290]: I0620 18:24:12.972068 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56902c27-56d3-4770-9350-0d79ea6c84ed-hubble-tls\") pod \"cilium-zq9wv\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " pod="kube-system/cilium-zq9wv" Jun 20 18:24:13.182119 containerd[2011]: time="2025-06-20T18:24:13.180986545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jfv2f,Uid:473f337e-0ac0-4086-880f-43090003e9aa,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:13.259978 containerd[2011]: time="2025-06-20T18:24:13.257964107Z" level=info msg="connecting to shim 554c93814be4e68c859da1de1e05a89549ac29ec6a8efcafa99b44f419e2dc55" address="unix:///run/containerd/s/8e546e3318736ba8c5b3a76b98fa47ff68b10c9d637b0814b9efa706b899bbfb" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:13.316486 sudo[2340]: pam_unix(sudo:session): session closed for user root Jun 20 18:24:13.341988 sshd[2339]: Connection closed by 139.178.68.195 port 50150 Jun 20 18:24:13.341410 sshd-session[2337]: pam_unix(sshd:session): session closed for user core Jun 20 18:24:13.363795 systemd[1]: Started cri-containerd-554c93814be4e68c859da1de1e05a89549ac29ec6a8efcafa99b44f419e2dc55.scope - libcontainer container 554c93814be4e68c859da1de1e05a89549ac29ec6a8efcafa99b44f419e2dc55. Jun 20 18:24:13.365087 systemd[1]: sshd@6-172.31.31.140:22-139.178.68.195:50150.service: Deactivated successfully. Jun 20 18:24:13.383561 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 18:24:13.385068 systemd[1]: session-7.scope: Consumed 9.489s CPU time, 273.5M memory peak. Jun 20 18:24:13.390122 systemd-logind[1981]: Session 7 logged out. Waiting for processes to exit. Jun 20 18:24:13.405958 systemd-logind[1981]: Removed session 7. Jun 20 18:24:13.418935 kubelet[3290]: I0620 18:24:13.418686 3290 status_manager.go:890] "Failed to get status for pod" podUID="1d30d606-8672-41b7-a609-b5f759fdb43c" pod="kube-system/cilium-operator-6c4d7847fc-fsbjg" err="pods \"cilium-operator-6c4d7847fc-fsbjg\" is forbidden: User \"system:node:ip-172-31-31-140\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-140' and this object" Jun 20 18:24:13.431861 systemd[1]: Created slice kubepods-besteffort-pod1d30d606_8672_41b7_a609_b5f759fdb43c.slice - libcontainer container kubepods-besteffort-pod1d30d606_8672_41b7_a609_b5f759fdb43c.slice. 
Jun 20 18:24:13.482693 kubelet[3290]: I0620 18:24:13.482309 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdsc\" (UniqueName: \"kubernetes.io/projected/1d30d606-8672-41b7-a609-b5f759fdb43c-kube-api-access-wzdsc\") pod \"cilium-operator-6c4d7847fc-fsbjg\" (UID: \"1d30d606-8672-41b7-a609-b5f759fdb43c\") " pod="kube-system/cilium-operator-6c4d7847fc-fsbjg" Jun 20 18:24:13.482693 kubelet[3290]: I0620 18:24:13.482513 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d30d606-8672-41b7-a609-b5f759fdb43c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fsbjg\" (UID: \"1d30d606-8672-41b7-a609-b5f759fdb43c\") " pod="kube-system/cilium-operator-6c4d7847fc-fsbjg" Jun 20 18:24:13.573134 containerd[2011]: time="2025-06-20T18:24:13.572994117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jfv2f,Uid:473f337e-0ac0-4086-880f-43090003e9aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"554c93814be4e68c859da1de1e05a89549ac29ec6a8efcafa99b44f419e2dc55\"" Jun 20 18:24:13.580977 containerd[2011]: time="2025-06-20T18:24:13.580888975Z" level=info msg="CreateContainer within sandbox \"554c93814be4e68c859da1de1e05a89549ac29ec6a8efcafa99b44f419e2dc55\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 18:24:13.612067 containerd[2011]: time="2025-06-20T18:24:13.612000867Z" level=info msg="Container 68fd13be0f13bbfe2885a7328176ae3db224906348e6fffda6ba7da6a03b49ed: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:13.635092 containerd[2011]: time="2025-06-20T18:24:13.635035759Z" level=info msg="CreateContainer within sandbox \"554c93814be4e68c859da1de1e05a89549ac29ec6a8efcafa99b44f419e2dc55\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"68fd13be0f13bbfe2885a7328176ae3db224906348e6fffda6ba7da6a03b49ed\"" Jun 20 18:24:13.636820 containerd[2011]: time="2025-06-20T18:24:13.636694460Z" level=info msg="StartContainer for \"68fd13be0f13bbfe2885a7328176ae3db224906348e6fffda6ba7da6a03b49ed\"" Jun 20 18:24:13.639861 containerd[2011]: time="2025-06-20T18:24:13.639798911Z" level=info msg="connecting to shim 68fd13be0f13bbfe2885a7328176ae3db224906348e6fffda6ba7da6a03b49ed" address="unix:///run/containerd/s/8e546e3318736ba8c5b3a76b98fa47ff68b10c9d637b0814b9efa706b899bbfb" protocol=ttrpc version=3 Jun 20 18:24:13.675153 systemd[1]: Started cri-containerd-68fd13be0f13bbfe2885a7328176ae3db224906348e6fffda6ba7da6a03b49ed.scope - libcontainer container 68fd13be0f13bbfe2885a7328176ae3db224906348e6fffda6ba7da6a03b49ed. 
Jun 20 18:24:13.756383 containerd[2011]: time="2025-06-20T18:24:13.755805158Z" level=info msg="StartContainer for \"68fd13be0f13bbfe2885a7328176ae3db224906348e6fffda6ba7da6a03b49ed\" returns successfully" Jun 20 18:24:13.927729 kubelet[3290]: I0620 18:24:13.927625 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jfv2f" podStartSLOduration=1.927599037 podStartE2EDuration="1.927599037s" podCreationTimestamp="2025-06-20 18:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:13.901627586 +0000 UTC m=+5.425428898" watchObservedRunningTime="2025-06-20 18:24:13.927599037 +0000 UTC m=+5.451400325" Jun 20 18:24:14.079065 kubelet[3290]: E0620 18:24:14.078921 3290 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jun 20 18:24:14.079196 kubelet[3290]: E0620 18:24:14.079069 3290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-config-path podName:56902c27-56d3-4770-9350-0d79ea6c84ed nodeName:}" failed. No retries permitted until 2025-06-20 18:24:14.579034462 +0000 UTC m=+6.102835738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-config-path") pod "cilium-zq9wv" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed") : failed to sync configmap cache: timed out waiting for the condition Jun 20 18:24:14.089562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115078579.mount: Deactivated successfully. Jun 20 18:24:14.643208 containerd[2011]: time="2025-06-20T18:24:14.643065613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fsbjg,Uid:1d30d606-8672-41b7-a609-b5f759fdb43c,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:14.688224 containerd[2011]: time="2025-06-20T18:24:14.688070260Z" level=info msg="connecting to shim 0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7" address="unix:///run/containerd/s/28f07d98521bd5bc3801c2188c07a4bc79aec07dac54348518b00affc51ee61b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:14.736171 systemd[1]: Started cri-containerd-0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7.scope - libcontainer container 0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7. Jun 20 18:24:14.738025 containerd[2011]: time="2025-06-20T18:24:14.737788139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zq9wv,Uid:56902c27-56d3-4770-9350-0d79ea6c84ed,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:14.800910 containerd[2011]: time="2025-06-20T18:24:14.799798409Z" level=info msg="connecting to shim a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334" address="unix:///run/containerd/s/626b63e1ecf4c3612b6958c3e5288be1881ecb9b8d6c13b10832c2655f997f0c" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:14.854337 systemd[1]: Started cri-containerd-a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334.scope - libcontainer container a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334. 
Jun 20 18:24:14.860020 containerd[2011]: time="2025-06-20T18:24:14.859963333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fsbjg,Uid:1d30d606-8672-41b7-a609-b5f759fdb43c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\"" Jun 20 18:24:14.863272 containerd[2011]: time="2025-06-20T18:24:14.863108305Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 18:24:14.915966 containerd[2011]: time="2025-06-20T18:24:14.915753919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zq9wv,Uid:56902c27-56d3-4770-9350-0d79ea6c84ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\"" Jun 20 18:24:16.691103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629221406.mount: Deactivated successfully. Jun 20 18:24:18.814041 containerd[2011]: time="2025-06-20T18:24:18.812383307Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:18.814041 containerd[2011]: time="2025-06-20T18:24:18.813954400Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jun 20 18:24:18.816885 containerd[2011]: time="2025-06-20T18:24:18.815276045Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:18.822551 containerd[2011]: time="2025-06-20T18:24:18.822436723Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.958860196s" Jun 20 18:24:18.822551 containerd[2011]: time="2025-06-20T18:24:18.822535941Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jun 20 18:24:18.825435 containerd[2011]: time="2025-06-20T18:24:18.825349643Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 18:24:18.828708 containerd[2011]: time="2025-06-20T18:24:18.828628950Z" level=info msg="CreateContainer within sandbox \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 18:24:18.847916 containerd[2011]: time="2025-06-20T18:24:18.846792359Z" level=info msg="Container b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:18.864130 containerd[2011]: time="2025-06-20T18:24:18.864050912Z" level=info msg="CreateContainer within sandbox \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns 
container id \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\"" Jun 20 18:24:18.865397 containerd[2011]: time="2025-06-20T18:24:18.865322816Z" level=info msg="StartContainer for \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\"" Jun 20 18:24:18.867307 containerd[2011]: time="2025-06-20T18:24:18.867226319Z" level=info msg="connecting to shim b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75" address="unix:///run/containerd/s/28f07d98521bd5bc3801c2188c07a4bc79aec07dac54348518b00affc51ee61b" protocol=ttrpc version=3 Jun 20 18:24:18.924330 systemd[1]: Started cri-containerd-b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75.scope - libcontainer container b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75. Jun 20 18:24:18.997643 containerd[2011]: time="2025-06-20T18:24:18.997509861Z" level=info msg="StartContainer for \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\" returns successfully" Jun 20 18:24:24.872260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223022408.mount: Deactivated successfully. Jun 20 18:24:27.775366 containerd[2011]: time="2025-06-20T18:24:27.775293289Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:27.777626 containerd[2011]: time="2025-06-20T18:24:27.777544846Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jun 20 18:24:27.779869 containerd[2011]: time="2025-06-20T18:24:27.779732015Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:27.783619 containerd[2011]: time="2025-06-20T18:24:27.783349951Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.957924863s" Jun 20 18:24:27.783619 containerd[2011]: time="2025-06-20T18:24:27.783423368Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jun 20 18:24:27.791396 containerd[2011]: time="2025-06-20T18:24:27.791323400Z" level=info msg="CreateContainer within sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:24:27.810444 containerd[2011]: time="2025-06-20T18:24:27.810138999Z" level=info msg="Container 9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:27.819125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2160031248.mount: Deactivated successfully. 
Jun 20 18:24:27.835387 containerd[2011]: time="2025-06-20T18:24:27.835298305Z" level=info msg="CreateContainer within sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\"" Jun 20 18:24:27.838067 containerd[2011]: time="2025-06-20T18:24:27.837830742Z" level=info msg="StartContainer for \"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\"" Jun 20 18:24:27.840256 containerd[2011]: time="2025-06-20T18:24:27.840168587Z" level=info msg="connecting to shim 9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096" address="unix:///run/containerd/s/626b63e1ecf4c3612b6958c3e5288be1881ecb9b8d6c13b10832c2655f997f0c" protocol=ttrpc version=3 Jun 20 18:24:27.887195 systemd[1]: Started cri-containerd-9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096.scope - libcontainer container 9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096. Jun 20 18:24:27.964259 containerd[2011]: time="2025-06-20T18:24:27.962629571Z" level=info msg="StartContainer for \"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\" returns successfully" Jun 20 18:24:27.980418 systemd[1]: cri-containerd-9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096.scope: Deactivated successfully. Jun 20 18:24:27.987427 containerd[2011]: time="2025-06-20T18:24:27.987358594Z" level=info msg="received exit event container_id:\"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\" id:\"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\" pid:4024 exited_at:{seconds:1750443867 nanos:986854162}" Jun 20 18:24:27.988410 containerd[2011]: time="2025-06-20T18:24:27.988354011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\" id:\"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\" pid:4024 exited_at:{seconds:1750443867 nanos:986854162}" Jun 20 18:24:28.033895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096-rootfs.mount: Deactivated successfully. Jun 20 18:24:28.974277 containerd[2011]: time="2025-06-20T18:24:28.974189381Z" level=info msg="CreateContainer within sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:24:29.006852 containerd[2011]: time="2025-06-20T18:24:29.006748393Z" level=info msg="Container 8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:29.020537 kubelet[3290]: I0620 18:24:29.020379 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fsbjg" podStartSLOduration=12.058267167 podStartE2EDuration="16.020352908s" podCreationTimestamp="2025-06-20 18:24:13 +0000 UTC" firstStartedPulling="2025-06-20 18:24:14.862242804 +0000 UTC m=+6.386044092" lastFinishedPulling="2025-06-20 18:24:18.824328545 +0000 UTC m=+10.348129833" observedRunningTime="2025-06-20 18:24:19.956388111 +0000 UTC m=+11.480189411" watchObservedRunningTime="2025-06-20 18:24:29.020352908 +0000 UTC m=+20.544154256" Jun 20 18:24:29.020698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3982825617.mount: Deactivated successfully. 
Jun 20 18:24:29.030932 containerd[2011]: time="2025-06-20T18:24:29.029805256Z" level=info msg="CreateContainer within sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\"" Jun 20 18:24:29.033692 containerd[2011]: time="2025-06-20T18:24:29.031981055Z" level=info msg="StartContainer for \"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\"" Jun 20 18:24:29.034571 containerd[2011]: time="2025-06-20T18:24:29.034478315Z" level=info msg="connecting to shim 8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151" address="unix:///run/containerd/s/626b63e1ecf4c3612b6958c3e5288be1881ecb9b8d6c13b10832c2655f997f0c" protocol=ttrpc version=3 Jun 20 18:24:29.085177 systemd[1]: Started cri-containerd-8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151.scope - libcontainer container 8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151. Jun 20 18:24:29.176904 containerd[2011]: time="2025-06-20T18:24:29.176225643Z" level=info msg="StartContainer for \"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\" returns successfully" Jun 20 18:24:29.204063 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:24:29.204653 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:24:29.205597 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:24:29.210664 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:24:29.217467 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:24:29.224907 systemd[1]: cri-containerd-8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151.scope: Deactivated successfully. Jun 20 18:24:29.231930 containerd[2011]: time="2025-06-20T18:24:29.231311644Z" level=info msg="received exit event container_id:\"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\" id:\"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\" pid:4070 exited_at:{seconds:1750443869 nanos:230946626}" Jun 20 18:24:29.231930 containerd[2011]: time="2025-06-20T18:24:29.231657921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\" id:\"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\" pid:4070 exited_at:{seconds:1750443869 nanos:230946626}" Jun 20 18:24:29.260573 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:24:29.989075 containerd[2011]: time="2025-06-20T18:24:29.986822324Z" level=info msg="CreateContainer within sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:24:30.002143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151-rootfs.mount: Deactivated successfully. 
Jun 20 18:24:30.017392 containerd[2011]: time="2025-06-20T18:24:30.015991201Z" level=info msg="Container 3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:30.046356 containerd[2011]: time="2025-06-20T18:24:30.046291140Z" level=info msg="CreateContainer within sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\"" Jun 20 18:24:30.049173 containerd[2011]: time="2025-06-20T18:24:30.049101889Z" level=info msg="StartContainer for \"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\"" Jun 20 18:24:30.054149 containerd[2011]: time="2025-06-20T18:24:30.053825986Z" level=info msg="connecting to shim 3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431" address="unix:///run/containerd/s/626b63e1ecf4c3612b6958c3e5288be1881ecb9b8d6c13b10832c2655f997f0c" protocol=ttrpc version=3 Jun 20 18:24:30.103254 systemd[1]: Started cri-containerd-3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431.scope - libcontainer container 3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431. Jun 20 18:24:30.202922 containerd[2011]: time="2025-06-20T18:24:30.202035158Z" level=info msg="StartContainer for \"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\" returns successfully" Jun 20 18:24:30.202370 systemd[1]: cri-containerd-3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431.scope: Deactivated successfully. Jun 20 18:24:30.208979 containerd[2011]: time="2025-06-20T18:24:30.208771028Z" level=info msg="received exit event container_id:\"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\" id:\"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\" pid:4118 exited_at:{seconds:1750443870 nanos:208100445}" Jun 20 18:24:30.210418 containerd[2011]: time="2025-06-20T18:24:30.210216611Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\" id:\"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\" pid:4118 exited_at:{seconds:1750443870 nanos:208100445}" Jun 20 18:24:30.262260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431-rootfs.mount: Deactivated successfully. 
Jun 20 18:24:31.001440 containerd[2011]: time="2025-06-20T18:24:31.000755348Z" level=info msg="CreateContainer within sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:24:31.042270 containerd[2011]: time="2025-06-20T18:24:31.042164655Z" level=info msg="Container a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:31.059566 containerd[2011]: time="2025-06-20T18:24:31.059492206Z" level=info msg="CreateContainer within sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\"" Jun 20 18:24:31.062616 containerd[2011]: time="2025-06-20T18:24:31.061904968Z" level=info msg="StartContainer for \"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\"" Jun 20 18:24:31.064539 containerd[2011]: time="2025-06-20T18:24:31.064140041Z" level=info msg="connecting to shim a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb" address="unix:///run/containerd/s/626b63e1ecf4c3612b6958c3e5288be1881ecb9b8d6c13b10832c2655f997f0c" protocol=ttrpc version=3 Jun 20 18:24:31.107318 systemd[1]: Started cri-containerd-a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb.scope - libcontainer container a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb. Jun 20 18:24:31.169756 systemd[1]: cri-containerd-a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb.scope: Deactivated successfully. Jun 20 18:24:31.175085 containerd[2011]: time="2025-06-20T18:24:31.174989087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\" id:\"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\" pid:4161 exited_at:{seconds:1750443871 nanos:172994722}" Jun 20 18:24:31.176490 containerd[2011]: time="2025-06-20T18:24:31.176335344Z" level=info msg="received exit event container_id:\"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\" id:\"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\" pid:4161 exited_at:{seconds:1750443871 nanos:172994722}" Jun 20 18:24:31.193277 containerd[2011]: time="2025-06-20T18:24:31.193228854Z" level=info msg="StartContainer for \"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\" returns successfully" Jun 20 18:24:31.224353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb-rootfs.mount: Deactivated successfully. 
Jun 20 18:24:32.013914 containerd[2011]: time="2025-06-20T18:24:32.013724843Z" level=info msg="CreateContainer within sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:24:32.048901 containerd[2011]: time="2025-06-20T18:24:32.046173015Z" level=info msg="Container 6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:32.076895 containerd[2011]: time="2025-06-20T18:24:32.076774448Z" level=info msg="CreateContainer within sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\"" Jun 20 18:24:32.079230 containerd[2011]: time="2025-06-20T18:24:32.079094044Z" level=info msg="StartContainer for \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\"" Jun 20 18:24:32.082055 containerd[2011]: time="2025-06-20T18:24:32.081954773Z" level=info msg="connecting to shim 6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189" address="unix:///run/containerd/s/626b63e1ecf4c3612b6958c3e5288be1881ecb9b8d6c13b10832c2655f997f0c" protocol=ttrpc version=3 Jun 20 18:24:32.121206 systemd[1]: Started cri-containerd-6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189.scope - libcontainer container 6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189. Jun 20 18:24:32.203882 containerd[2011]: time="2025-06-20T18:24:32.203698844Z" level=info msg="StartContainer for \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" returns successfully" Jun 20 18:24:32.355934 containerd[2011]: time="2025-06-20T18:24:32.355642735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" id:\"dcd87b96aa3dbb54b1422f423493a36ac60094d30311c80081b05a8386c4bf8b\" pid:4227 exited_at:{seconds:1750443872 nanos:354614361}" Jun 20 18:24:32.383981 kubelet[3290]: I0620 18:24:32.383391 3290 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 18:24:32.479743 systemd[1]: Created slice kubepods-burstable-pod91fb5617_15be_487c_b1d3_8a55f66621ff.slice - libcontainer container kubepods-burstable-pod91fb5617_15be_487c_b1d3_8a55f66621ff.slice. Jun 20 18:24:32.496540 systemd[1]: Created slice kubepods-burstable-pod16e1e06b_3972_49b5_b7a6_3f75cda0b278.slice - libcontainer container kubepods-burstable-pod16e1e06b_3972_49b5_b7a6_3f75cda0b278.slice. 
Jun 20 18:24:32.539354 kubelet[3290]: I0620 18:24:32.539221 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rndb2\" (UniqueName: \"kubernetes.io/projected/16e1e06b-3972-49b5-b7a6-3f75cda0b278-kube-api-access-rndb2\") pod \"coredns-668d6bf9bc-xpg86\" (UID: \"16e1e06b-3972-49b5-b7a6-3f75cda0b278\") " pod="kube-system/coredns-668d6bf9bc-xpg86" Jun 20 18:24:32.539354 kubelet[3290]: I0620 18:24:32.539315 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16e1e06b-3972-49b5-b7a6-3f75cda0b278-config-volume\") pod \"coredns-668d6bf9bc-xpg86\" (UID: \"16e1e06b-3972-49b5-b7a6-3f75cda0b278\") " pod="kube-system/coredns-668d6bf9bc-xpg86" Jun 20 18:24:32.539667 kubelet[3290]: I0620 18:24:32.539368 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91fb5617-15be-487c-b1d3-8a55f66621ff-config-volume\") pod \"coredns-668d6bf9bc-nm9rm\" (UID: \"91fb5617-15be-487c-b1d3-8a55f66621ff\") " pod="kube-system/coredns-668d6bf9bc-nm9rm" Jun 20 18:24:32.539667 kubelet[3290]: I0620 18:24:32.539408 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vltl\" (UniqueName: \"kubernetes.io/projected/91fb5617-15be-487c-b1d3-8a55f66621ff-kube-api-access-5vltl\") pod \"coredns-668d6bf9bc-nm9rm\" (UID: \"91fb5617-15be-487c-b1d3-8a55f66621ff\") " pod="kube-system/coredns-668d6bf9bc-nm9rm" Jun 20 18:24:32.790306 containerd[2011]: time="2025-06-20T18:24:32.790229479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nm9rm,Uid:91fb5617-15be-487c-b1d3-8a55f66621ff,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:32.808574 containerd[2011]: time="2025-06-20T18:24:32.808261794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xpg86,Uid:16e1e06b-3972-49b5-b7a6-3f75cda0b278,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:35.338362 (udev-worker)[4287]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:24:35.338963 systemd-networkd[1900]: cilium_host: Link UP Jun 20 18:24:35.340707 systemd-networkd[1900]: cilium_net: Link UP Jun 20 18:24:35.341070 systemd-networkd[1900]: cilium_net: Gained carrier Jun 20 18:24:35.341371 systemd-networkd[1900]: cilium_host: Gained carrier Jun 20 18:24:35.343973 (udev-worker)[4289]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:24:35.522385 systemd-networkd[1900]: cilium_vxlan: Link UP Jun 20 18:24:35.522406 systemd-networkd[1900]: cilium_vxlan: Gained carrier Jun 20 18:24:35.561521 systemd-networkd[1900]: cilium_host: Gained IPv6LL Jun 20 18:24:36.030894 kernel: NET: Registered PF_ALG protocol family Jun 20 18:24:36.073513 systemd-networkd[1900]: cilium_net: Gained IPv6LL Jun 20 18:24:36.586454 systemd-networkd[1900]: cilium_vxlan: Gained IPv6LL Jun 20 18:24:37.439170 systemd-networkd[1900]: lxc_health: Link UP Jun 20 18:24:37.450087 systemd-networkd[1900]: lxc_health: Gained carrier Jun 20 18:24:37.884991 systemd-networkd[1900]: lxcebd79658bfff: Link UP Jun 20 18:24:37.891793 kernel: eth0: renamed from tmpff11b Jun 20 18:24:37.896692 (udev-worker)[4335]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:24:37.898753 (udev-worker)[4334]: Network interface NamePolicy= disabled on kernel command line. 
Jun 20 18:24:37.906472 systemd-networkd[1900]: lxcebd79658bfff: Gained carrier Jun 20 18:24:37.908388 systemd-networkd[1900]: lxc55fba49f0ac5: Link UP Jun 20 18:24:37.929882 kernel: eth0: renamed from tmp5467f Jun 20 18:24:37.941285 systemd-networkd[1900]: lxc55fba49f0ac5: Gained carrier Jun 20 18:24:38.791891 kubelet[3290]: I0620 18:24:38.789717 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zq9wv" podStartSLOduration=13.922578333 podStartE2EDuration="26.789581017s" podCreationTimestamp="2025-06-20 18:24:12 +0000 UTC" firstStartedPulling="2025-06-20 18:24:14.918353782 +0000 UTC m=+6.442155070" lastFinishedPulling="2025-06-20 18:24:27.785356478 +0000 UTC m=+19.309157754" observedRunningTime="2025-06-20 18:24:33.096920301 +0000 UTC m=+24.620721686" watchObservedRunningTime="2025-06-20 18:24:38.789581017 +0000 UTC m=+30.313382305" Jun 20 18:24:39.017083 systemd-networkd[1900]: lxc_health: Gained IPv6LL Jun 20 18:24:39.465274 systemd-networkd[1900]: lxc55fba49f0ac5: Gained IPv6LL Jun 20 18:24:39.467371 systemd-networkd[1900]: lxcebd79658bfff: Gained IPv6LL Jun 20 18:24:41.786227 ntpd[1970]: Listen normally on 8 cilium_host 192.168.0.51:123 Jun 20 18:24:41.787354 ntpd[1970]: 20 Jun 18:24:41 ntpd[1970]: Listen normally on 8 cilium_host 192.168.0.51:123 Jun 20 18:24:41.787354 ntpd[1970]: 20 Jun 18:24:41 ntpd[1970]: Listen normally on 9 cilium_net [fe80::982b:7cff:fed4:310e%4]:123 Jun 20 18:24:41.787354 ntpd[1970]: 20 Jun 18:24:41 ntpd[1970]: Listen normally on 10 cilium_host [fe80::a06d:85ff:fe69:6e1c%5]:123 Jun 20 18:24:41.787354 ntpd[1970]: 20 Jun 18:24:41 ntpd[1970]: Listen normally on 11 cilium_vxlan [fe80::c0c9:b6ff:fec2:f905%6]:123 Jun 20 18:24:41.787354 ntpd[1970]: 20 Jun 18:24:41 ntpd[1970]: Listen normally on 12 lxc_health [fe80::c3a:dbff:fe58:83e4%8]:123 Jun 20 18:24:41.787354 ntpd[1970]: 20 Jun 18:24:41 ntpd[1970]: Listen normally on 13 lxcebd79658bfff [fe80::900d:f8ff:fede:8362%10]:123 Jun 20 18:24:41.787354 ntpd[1970]: 20 Jun 18:24:41 ntpd[1970]: Listen normally on 14 lxc55fba49f0ac5 [fe80::90df:75ff:fe83:fa8c%12]:123 Jun 20 18:24:41.786413 ntpd[1970]: Listen normally on 9 cilium_net [fe80::982b:7cff:fed4:310e%4]:123 Jun 20 18:24:41.786499 ntpd[1970]: Listen normally on 10 cilium_host [fe80::a06d:85ff:fe69:6e1c%5]:123 Jun 20 18:24:41.786565 ntpd[1970]: Listen normally on 11 cilium_vxlan [fe80::c0c9:b6ff:fec2:f905%6]:123 Jun 20 18:24:41.786650 ntpd[1970]: Listen normally on 12 lxc_health [fe80::c3a:dbff:fe58:83e4%8]:123 Jun 20 18:24:41.786719 ntpd[1970]: Listen normally on 13 lxcebd79658bfff [fe80::900d:f8ff:fede:8362%10]:123 Jun 20 18:24:41.786784 ntpd[1970]: Listen normally on 14 lxc55fba49f0ac5 [fe80::90df:75ff:fe83:fa8c%12]:123 Jun 20 18:24:46.546524 containerd[2011]: time="2025-06-20T18:24:46.546145579Z" level=info msg="connecting to shim 5467f02fb1930a5d9622e13699c28015678bc6127ac01d04f6bf47d5022aa9b3" address="unix:///run/containerd/s/c5536675d44202f06465b4df0158fa2c5a7be3e7c5cf3693a2aa85cd2eb0ccfe" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:46.552079 containerd[2011]: time="2025-06-20T18:24:46.551086973Z" level=info msg="connecting to shim ff11b9112bec190ed84b58b1087faa72a65b23b61cadaf94b6e9c47ef8b94b05" address="unix:///run/containerd/s/1f340adf49cf56e750c42d7ef189302e47ce2b51cb21683d7f73f00f7ef1261e" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:46.634403 systemd[1]: Started cri-containerd-5467f02fb1930a5d9622e13699c28015678bc6127ac01d04f6bf47d5022aa9b3.scope - libcontainer container 
5467f02fb1930a5d9622e13699c28015678bc6127ac01d04f6bf47d5022aa9b3. Jun 20 18:24:46.648635 systemd[1]: Started cri-containerd-ff11b9112bec190ed84b58b1087faa72a65b23b61cadaf94b6e9c47ef8b94b05.scope - libcontainer container ff11b9112bec190ed84b58b1087faa72a65b23b61cadaf94b6e9c47ef8b94b05. Jun 20 18:24:46.766031 containerd[2011]: time="2025-06-20T18:24:46.765832581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xpg86,Uid:16e1e06b-3972-49b5-b7a6-3f75cda0b278,Namespace:kube-system,Attempt:0,} returns sandbox id \"5467f02fb1930a5d9622e13699c28015678bc6127ac01d04f6bf47d5022aa9b3\"" Jun 20 18:24:46.777865 containerd[2011]: time="2025-06-20T18:24:46.777559526Z" level=info msg="CreateContainer within sandbox \"5467f02fb1930a5d9622e13699c28015678bc6127ac01d04f6bf47d5022aa9b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:24:46.823993 containerd[2011]: time="2025-06-20T18:24:46.823262142Z" level=info msg="Container 2bfa598a0abd51cf59352df4ca26986f79263bd4073b17793a256dcf5980e5a7: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:46.840582 containerd[2011]: time="2025-06-20T18:24:46.840506420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nm9rm,Uid:91fb5617-15be-487c-b1d3-8a55f66621ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff11b9112bec190ed84b58b1087faa72a65b23b61cadaf94b6e9c47ef8b94b05\"" Jun 20 18:24:46.847652 containerd[2011]: time="2025-06-20T18:24:46.847482506Z" level=info msg="CreateContainer within sandbox \"ff11b9112bec190ed84b58b1087faa72a65b23b61cadaf94b6e9c47ef8b94b05\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:24:46.854748 containerd[2011]: time="2025-06-20T18:24:46.853757634Z" level=info msg="CreateContainer within sandbox \"5467f02fb1930a5d9622e13699c28015678bc6127ac01d04f6bf47d5022aa9b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2bfa598a0abd51cf59352df4ca26986f79263bd4073b17793a256dcf5980e5a7\"" Jun 20 18:24:46.856719 containerd[2011]: time="2025-06-20T18:24:46.856529544Z" level=info msg="StartContainer for \"2bfa598a0abd51cf59352df4ca26986f79263bd4073b17793a256dcf5980e5a7\"" Jun 20 18:24:46.860051 containerd[2011]: time="2025-06-20T18:24:46.859982049Z" level=info msg="connecting to shim 2bfa598a0abd51cf59352df4ca26986f79263bd4073b17793a256dcf5980e5a7" address="unix:///run/containerd/s/c5536675d44202f06465b4df0158fa2c5a7be3e7c5cf3693a2aa85cd2eb0ccfe" protocol=ttrpc version=3 Jun 20 18:24:46.868614 containerd[2011]: time="2025-06-20T18:24:46.867606063Z" level=info msg="Container d4f903b0ce7ec129c57054b11c75d5c9e43cd02945290968c9cdf9ba173e063d: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:46.881332 containerd[2011]: time="2025-06-20T18:24:46.881281053Z" level=info msg="CreateContainer within sandbox \"ff11b9112bec190ed84b58b1087faa72a65b23b61cadaf94b6e9c47ef8b94b05\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4f903b0ce7ec129c57054b11c75d5c9e43cd02945290968c9cdf9ba173e063d\"" Jun 20 18:24:46.883830 containerd[2011]: time="2025-06-20T18:24:46.883745669Z" level=info msg="StartContainer for \"d4f903b0ce7ec129c57054b11c75d5c9e43cd02945290968c9cdf9ba173e063d\"" Jun 20 18:24:46.890747 containerd[2011]: time="2025-06-20T18:24:46.890687298Z" level=info msg="connecting to shim d4f903b0ce7ec129c57054b11c75d5c9e43cd02945290968c9cdf9ba173e063d" address="unix:///run/containerd/s/1f340adf49cf56e750c42d7ef189302e47ce2b51cb21683d7f73f00f7ef1261e" protocol=ttrpc version=3 Jun 20 18:24:46.905161 
systemd[1]: Started cri-containerd-2bfa598a0abd51cf59352df4ca26986f79263bd4073b17793a256dcf5980e5a7.scope - libcontainer container 2bfa598a0abd51cf59352df4ca26986f79263bd4073b17793a256dcf5980e5a7. Jun 20 18:24:46.942140 systemd[1]: Started cri-containerd-d4f903b0ce7ec129c57054b11c75d5c9e43cd02945290968c9cdf9ba173e063d.scope - libcontainer container d4f903b0ce7ec129c57054b11c75d5c9e43cd02945290968c9cdf9ba173e063d. Jun 20 18:24:47.007213 containerd[2011]: time="2025-06-20T18:24:47.006929511Z" level=info msg="StartContainer for \"2bfa598a0abd51cf59352df4ca26986f79263bd4073b17793a256dcf5980e5a7\" returns successfully" Jun 20 18:24:47.036769 containerd[2011]: time="2025-06-20T18:24:47.036354777Z" level=info msg="StartContainer for \"d4f903b0ce7ec129c57054b11c75d5c9e43cd02945290968c9cdf9ba173e063d\" returns successfully" Jun 20 18:24:47.188318 kubelet[3290]: I0620 18:24:47.186937 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xpg86" podStartSLOduration=34.18691141 podStartE2EDuration="34.18691141s" podCreationTimestamp="2025-06-20 18:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:47.184060033 +0000 UTC m=+38.707861321" watchObservedRunningTime="2025-06-20 18:24:47.18691141 +0000 UTC m=+38.710712734" Jun 20 18:24:47.188318 kubelet[3290]: I0620 18:24:47.187928 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nm9rm" podStartSLOduration=34.187906131 podStartE2EDuration="34.187906131s" podCreationTimestamp="2025-06-20 18:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:47.147703704 +0000 UTC m=+38.671505016" watchObservedRunningTime="2025-06-20 18:24:47.187906131 +0000 UTC m=+38.711707515" Jun 20 18:24:47.502886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1522840081.mount: Deactivated successfully. Jun 20 18:24:50.367957 systemd[1]: Started sshd@7-172.31.31.140:22-139.178.68.195:46422.service - OpenSSH per-connection server daemon (139.178.68.195:46422). Jun 20 18:24:50.569141 sshd[4868]: Accepted publickey for core from 139.178.68.195 port 46422 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:24:50.571663 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:24:50.583934 systemd-logind[1981]: New session 8 of user core. Jun 20 18:24:50.589508 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 18:24:50.882916 sshd[4870]: Connection closed by 139.178.68.195 port 46422 Jun 20 18:24:50.883696 sshd-session[4868]: pam_unix(sshd:session): session closed for user core Jun 20 18:24:50.891351 systemd[1]: sshd@7-172.31.31.140:22-139.178.68.195:46422.service: Deactivated successfully. Jun 20 18:24:50.895676 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 18:24:50.898136 systemd-logind[1981]: Session 8 logged out. Waiting for processes to exit. Jun 20 18:24:50.901288 systemd-logind[1981]: Removed session 8. Jun 20 18:24:55.922944 systemd[1]: Started sshd@8-172.31.31.140:22-139.178.68.195:35414.service - OpenSSH per-connection server daemon (139.178.68.195:35414). 
Jun 20 18:24:56.120922 sshd[4883]: Accepted publickey for core from 139.178.68.195 port 35414 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:24:56.125548 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:24:56.133511 systemd-logind[1981]: New session 9 of user core. Jun 20 18:24:56.143174 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 18:24:56.388539 sshd[4885]: Connection closed by 139.178.68.195 port 35414 Jun 20 18:24:56.389421 sshd-session[4883]: pam_unix(sshd:session): session closed for user core Jun 20 18:24:56.396728 systemd[1]: sshd@8-172.31.31.140:22-139.178.68.195:35414.service: Deactivated successfully. Jun 20 18:24:56.399889 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 18:24:56.402475 systemd-logind[1981]: Session 9 logged out. Waiting for processes to exit. Jun 20 18:24:56.406271 systemd-logind[1981]: Removed session 9. Jun 20 18:25:01.428499 systemd[1]: Started sshd@9-172.31.31.140:22-139.178.68.195:35424.service - OpenSSH per-connection server daemon (139.178.68.195:35424). Jun 20 18:25:01.642147 sshd[4897]: Accepted publickey for core from 139.178.68.195 port 35424 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:01.645188 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:01.654933 systemd-logind[1981]: New session 10 of user core. Jun 20 18:25:01.662121 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 18:25:01.984998 sshd[4899]: Connection closed by 139.178.68.195 port 35424 Jun 20 18:25:01.985884 sshd-session[4897]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:01.992647 systemd[1]: sshd@9-172.31.31.140:22-139.178.68.195:35424.service: Deactivated successfully. Jun 20 18:25:01.993192 systemd-logind[1981]: Session 10 logged out. Waiting for processes to exit. Jun 20 18:25:01.996778 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 18:25:02.004084 systemd-logind[1981]: Removed session 10. Jun 20 18:25:07.024302 systemd[1]: Started sshd@10-172.31.31.140:22-139.178.68.195:34630.service - OpenSSH per-connection server daemon (139.178.68.195:34630). Jun 20 18:25:07.212742 sshd[4912]: Accepted publickey for core from 139.178.68.195 port 34630 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:07.215457 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:07.223763 systemd-logind[1981]: New session 11 of user core. Jun 20 18:25:07.237263 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 18:25:07.478376 sshd[4914]: Connection closed by 139.178.68.195 port 34630 Jun 20 18:25:07.479466 sshd-session[4912]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:07.489433 systemd[1]: sshd@10-172.31.31.140:22-139.178.68.195:34630.service: Deactivated successfully. Jun 20 18:25:07.495523 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 18:25:07.497907 systemd-logind[1981]: Session 11 logged out. Waiting for processes to exit. Jun 20 18:25:07.515071 systemd-logind[1981]: Removed session 11. Jun 20 18:25:07.518441 systemd[1]: Started sshd@11-172.31.31.140:22-139.178.68.195:34632.service - OpenSSH per-connection server daemon (139.178.68.195:34632). 
Jun 20 18:25:07.711441 sshd[4927]: Accepted publickey for core from 139.178.68.195 port 34632 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:07.713893 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:07.723470 systemd-logind[1981]: New session 12 of user core. Jun 20 18:25:07.734119 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 18:25:08.056354 sshd[4929]: Connection closed by 139.178.68.195 port 34632 Jun 20 18:25:08.057424 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:08.069818 systemd[1]: sshd@11-172.31.31.140:22-139.178.68.195:34632.service: Deactivated successfully. Jun 20 18:25:08.070231 systemd-logind[1981]: Session 12 logged out. Waiting for processes to exit. Jun 20 18:25:08.077625 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 18:25:08.110934 systemd[1]: Started sshd@12-172.31.31.140:22-139.178.68.195:34642.service - OpenSSH per-connection server daemon (139.178.68.195:34642). Jun 20 18:25:08.115752 systemd-logind[1981]: Removed session 12. Jun 20 18:25:08.310584 sshd[4939]: Accepted publickey for core from 139.178.68.195 port 34642 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:08.313297 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:08.321734 systemd-logind[1981]: New session 13 of user core. Jun 20 18:25:08.329146 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 18:25:08.578911 sshd[4941]: Connection closed by 139.178.68.195 port 34642 Jun 20 18:25:08.579236 sshd-session[4939]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:08.587483 systemd[1]: sshd@12-172.31.31.140:22-139.178.68.195:34642.service: Deactivated successfully. Jun 20 18:25:08.592105 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 18:25:08.595935 systemd-logind[1981]: Session 13 logged out. Waiting for processes to exit. Jun 20 18:25:08.598584 systemd-logind[1981]: Removed session 13. Jun 20 18:25:13.632821 systemd[1]: Started sshd@13-172.31.31.140:22-139.178.68.195:42910.service - OpenSSH per-connection server daemon (139.178.68.195:42910). Jun 20 18:25:13.847462 sshd[4955]: Accepted publickey for core from 139.178.68.195 port 42910 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:13.849983 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:13.859984 systemd-logind[1981]: New session 14 of user core. Jun 20 18:25:13.865108 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 18:25:14.120787 sshd[4957]: Connection closed by 139.178.68.195 port 42910 Jun 20 18:25:14.121331 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:14.130253 systemd[1]: sshd@13-172.31.31.140:22-139.178.68.195:42910.service: Deactivated successfully. Jun 20 18:25:14.135625 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 18:25:14.139370 systemd-logind[1981]: Session 14 logged out. Waiting for processes to exit. Jun 20 18:25:14.142630 systemd-logind[1981]: Removed session 14. Jun 20 18:25:19.160403 systemd[1]: Started sshd@14-172.31.31.140:22-139.178.68.195:42916.service - OpenSSH per-connection server daemon (139.178.68.195:42916). 
Jun 20 18:25:19.352888 sshd[4971]: Accepted publickey for core from 139.178.68.195 port 42916 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:19.356097 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:19.364354 systemd-logind[1981]: New session 15 of user core. Jun 20 18:25:19.373081 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 18:25:19.619313 sshd[4973]: Connection closed by 139.178.68.195 port 42916 Jun 20 18:25:19.620144 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:19.628088 systemd[1]: sshd@14-172.31.31.140:22-139.178.68.195:42916.service: Deactivated successfully. Jun 20 18:25:19.631646 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 18:25:19.633833 systemd-logind[1981]: Session 15 logged out. Waiting for processes to exit. Jun 20 18:25:19.638249 systemd-logind[1981]: Removed session 15. Jun 20 18:25:24.663885 systemd[1]: Started sshd@15-172.31.31.140:22-139.178.68.195:39618.service - OpenSSH per-connection server daemon (139.178.68.195:39618). Jun 20 18:25:24.865545 sshd[4985]: Accepted publickey for core from 139.178.68.195 port 39618 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:24.868086 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:24.877661 systemd-logind[1981]: New session 16 of user core. Jun 20 18:25:24.883137 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 18:25:25.126638 sshd[4987]: Connection closed by 139.178.68.195 port 39618 Jun 20 18:25:25.127235 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:25.134268 systemd[1]: sshd@15-172.31.31.140:22-139.178.68.195:39618.service: Deactivated successfully. Jun 20 18:25:25.138367 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 18:25:25.142935 systemd-logind[1981]: Session 16 logged out. Waiting for processes to exit. Jun 20 18:25:25.145358 systemd-logind[1981]: Removed session 16. Jun 20 18:25:25.173507 systemd[1]: Started sshd@16-172.31.31.140:22-139.178.68.195:39626.service - OpenSSH per-connection server daemon (139.178.68.195:39626). Jun 20 18:25:25.370302 sshd[4999]: Accepted publickey for core from 139.178.68.195 port 39626 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:25.372811 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:25.384918 systemd-logind[1981]: New session 17 of user core. Jun 20 18:25:25.389143 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 18:25:25.684690 sshd[5001]: Connection closed by 139.178.68.195 port 39626 Jun 20 18:25:25.685653 sshd-session[4999]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:25.692460 systemd[1]: sshd@16-172.31.31.140:22-139.178.68.195:39626.service: Deactivated successfully. Jun 20 18:25:25.692773 systemd-logind[1981]: Session 17 logged out. Waiting for processes to exit. Jun 20 18:25:25.697074 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 18:25:25.702642 systemd-logind[1981]: Removed session 17. Jun 20 18:25:25.722098 systemd[1]: Started sshd@17-172.31.31.140:22-139.178.68.195:39636.service - OpenSSH per-connection server daemon (139.178.68.195:39636). 
Jun 20 18:25:25.922209 sshd[5011]: Accepted publickey for core from 139.178.68.195 port 39636 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:25.924661 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:25.932953 systemd-logind[1981]: New session 18 of user core. Jun 20 18:25:25.942098 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 18:25:27.597520 sshd[5013]: Connection closed by 139.178.68.195 port 39636 Jun 20 18:25:27.595920 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:27.609440 systemd[1]: sshd@17-172.31.31.140:22-139.178.68.195:39636.service: Deactivated successfully. Jun 20 18:25:27.620309 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 18:25:27.624821 systemd-logind[1981]: Session 18 logged out. Waiting for processes to exit. Jun 20 18:25:27.648284 systemd[1]: Started sshd@18-172.31.31.140:22-139.178.68.195:39646.service - OpenSSH per-connection server daemon (139.178.68.195:39646). Jun 20 18:25:27.652770 systemd-logind[1981]: Removed session 18. Jun 20 18:25:27.854655 sshd[5030]: Accepted publickey for core from 139.178.68.195 port 39646 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:27.857736 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:27.867340 systemd-logind[1981]: New session 19 of user core. Jun 20 18:25:27.878137 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 18:25:28.373931 sshd[5032]: Connection closed by 139.178.68.195 port 39646 Jun 20 18:25:28.375079 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:28.383428 systemd-logind[1981]: Session 19 logged out. Waiting for processes to exit. Jun 20 18:25:28.385516 systemd[1]: sshd@18-172.31.31.140:22-139.178.68.195:39646.service: Deactivated successfully. Jun 20 18:25:28.392807 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 18:25:28.411347 systemd-logind[1981]: Removed session 19. Jun 20 18:25:28.414306 systemd[1]: Started sshd@19-172.31.31.140:22-139.178.68.195:39662.service - OpenSSH per-connection server daemon (139.178.68.195:39662). Jun 20 18:25:28.603097 sshd[5042]: Accepted publickey for core from 139.178.68.195 port 39662 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:28.605515 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:28.613735 systemd-logind[1981]: New session 20 of user core. Jun 20 18:25:28.623113 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 18:25:28.865720 sshd[5044]: Connection closed by 139.178.68.195 port 39662 Jun 20 18:25:28.867129 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:28.875144 systemd[1]: sshd@19-172.31.31.140:22-139.178.68.195:39662.service: Deactivated successfully. Jun 20 18:25:28.879791 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 18:25:28.882937 systemd-logind[1981]: Session 20 logged out. Waiting for processes to exit. Jun 20 18:25:28.885904 systemd-logind[1981]: Removed session 20. Jun 20 18:25:33.913321 systemd[1]: Started sshd@20-172.31.31.140:22-139.178.68.195:41720.service - OpenSSH per-connection server daemon (139.178.68.195:41720). 
Jun 20 18:25:34.113263 sshd[5057]: Accepted publickey for core from 139.178.68.195 port 41720 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:34.116015 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:34.123935 systemd-logind[1981]: New session 21 of user core. Jun 20 18:25:34.136127 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 18:25:34.377098 sshd[5059]: Connection closed by 139.178.68.195 port 41720 Jun 20 18:25:34.379117 sshd-session[5057]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:34.387515 systemd[1]: sshd@20-172.31.31.140:22-139.178.68.195:41720.service: Deactivated successfully. Jun 20 18:25:34.392777 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 18:25:34.394933 systemd-logind[1981]: Session 21 logged out. Waiting for processes to exit. Jun 20 18:25:34.399646 systemd-logind[1981]: Removed session 21. Jun 20 18:25:39.423513 systemd[1]: Started sshd@21-172.31.31.140:22-139.178.68.195:41734.service - OpenSSH per-connection server daemon (139.178.68.195:41734). Jun 20 18:25:39.628391 sshd[5073]: Accepted publickey for core from 139.178.68.195 port 41734 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:39.630882 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:39.639377 systemd-logind[1981]: New session 22 of user core. Jun 20 18:25:39.647223 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 18:25:39.899486 sshd[5075]: Connection closed by 139.178.68.195 port 41734 Jun 20 18:25:39.900499 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:39.907778 systemd-logind[1981]: Session 22 logged out. Waiting for processes to exit. Jun 20 18:25:39.908300 systemd[1]: sshd@21-172.31.31.140:22-139.178.68.195:41734.service: Deactivated successfully. Jun 20 18:25:39.914665 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 18:25:39.921803 systemd-logind[1981]: Removed session 22. Jun 20 18:25:44.949008 systemd[1]: Started sshd@22-172.31.31.140:22-139.178.68.195:59106.service - OpenSSH per-connection server daemon (139.178.68.195:59106). Jun 20 18:25:45.149900 sshd[5089]: Accepted publickey for core from 139.178.68.195 port 59106 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:45.152422 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:45.162986 systemd-logind[1981]: New session 23 of user core. Jun 20 18:25:45.169120 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 18:25:45.416547 sshd[5091]: Connection closed by 139.178.68.195 port 59106 Jun 20 18:25:45.416413 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:45.423223 systemd[1]: sshd@22-172.31.31.140:22-139.178.68.195:59106.service: Deactivated successfully. Jun 20 18:25:45.426632 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 18:25:45.432565 systemd-logind[1981]: Session 23 logged out. Waiting for processes to exit. Jun 20 18:25:45.437459 systemd-logind[1981]: Removed session 23. Jun 20 18:25:50.459477 systemd[1]: Started sshd@23-172.31.31.140:22-139.178.68.195:59114.service - OpenSSH per-connection server daemon (139.178.68.195:59114). 
Jun 20 18:25:50.662599 sshd[5103]: Accepted publickey for core from 139.178.68.195 port 59114 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:50.665107 sshd-session[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:50.674043 systemd-logind[1981]: New session 24 of user core. Jun 20 18:25:50.686125 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 18:25:50.926059 sshd[5105]: Connection closed by 139.178.68.195 port 59114 Jun 20 18:25:50.926938 sshd-session[5103]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:50.934082 systemd[1]: sshd@23-172.31.31.140:22-139.178.68.195:59114.service: Deactivated successfully. Jun 20 18:25:50.939459 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 18:25:50.942950 systemd-logind[1981]: Session 24 logged out. Waiting for processes to exit. Jun 20 18:25:50.945377 systemd-logind[1981]: Removed session 24. Jun 20 18:25:50.969310 systemd[1]: Started sshd@24-172.31.31.140:22-139.178.68.195:59118.service - OpenSSH per-connection server daemon (139.178.68.195:59118). Jun 20 18:25:51.173645 sshd[5117]: Accepted publickey for core from 139.178.68.195 port 59118 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:51.176150 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:51.185013 systemd-logind[1981]: New session 25 of user core. Jun 20 18:25:51.194102 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 18:25:54.195816 containerd[2011]: time="2025-06-20T18:25:54.195246348Z" level=info msg="StopContainer for \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\" with timeout 30 (s)" Jun 20 18:25:54.197683 containerd[2011]: time="2025-06-20T18:25:54.197243486Z" level=info msg="Stop container \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\" with signal terminated" Jun 20 18:25:54.225729 systemd[1]: cri-containerd-b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75.scope: Deactivated successfully. 
Jun 20 18:25:54.232265 containerd[2011]: time="2025-06-20T18:25:54.232089821Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:25:54.234372 containerd[2011]: time="2025-06-20T18:25:54.234097284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\" id:\"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\" pid:3959 exited_at:{seconds:1750443954 nanos:233344904}" Jun 20 18:25:54.234372 containerd[2011]: time="2025-06-20T18:25:54.234215195Z" level=info msg="received exit event container_id:\"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\" id:\"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\" pid:3959 exited_at:{seconds:1750443954 nanos:233344904}" Jun 20 18:25:54.242658 containerd[2011]: time="2025-06-20T18:25:54.242605612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" id:\"9847fc697abc7a43a8672dc3fee4197b2a779f3eeb93635c86858e09859a4d6c\" pid:5140 exited_at:{seconds:1750443954 nanos:241972932}" Jun 20 18:25:54.250806 containerd[2011]: time="2025-06-20T18:25:54.250756053Z" level=info msg="StopContainer for \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" with timeout 2 (s)" Jun 20 18:25:54.252211 containerd[2011]: time="2025-06-20T18:25:54.252010260Z" level=info msg="Stop container \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" with signal terminated" Jun 20 18:25:54.267527 systemd-networkd[1900]: lxc_health: Link DOWN Jun 20 18:25:54.267545 systemd-networkd[1900]: lxc_health: Lost carrier Jun 20 18:25:54.312527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75-rootfs.mount: Deactivated successfully. Jun 20 18:25:54.317715 systemd[1]: cri-containerd-6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189.scope: Deactivated successfully. Jun 20 18:25:54.318384 systemd[1]: cri-containerd-6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189.scope: Consumed 14.500s CPU time, 125.3M memory peak, 136K read from disk, 12.9M written to disk. 
Jun 20 18:25:54.321668 containerd[2011]: time="2025-06-20T18:25:54.321586099Z" level=info msg="received exit event container_id:\"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" id:\"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" pid:4198 exited_at:{seconds:1750443954 nanos:321320707}" Jun 20 18:25:54.322532 containerd[2011]: time="2025-06-20T18:25:54.322039662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" id:\"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" pid:4198 exited_at:{seconds:1750443954 nanos:321320707}" Jun 20 18:25:54.339446 containerd[2011]: time="2025-06-20T18:25:54.339253505Z" level=info msg="StopContainer for \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\" returns successfully" Jun 20 18:25:54.340479 containerd[2011]: time="2025-06-20T18:25:54.340415493Z" level=info msg="StopPodSandbox for \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\"" Jun 20 18:25:54.340720 containerd[2011]: time="2025-06-20T18:25:54.340517040Z" level=info msg="Container to stop \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:25:54.355805 systemd[1]: cri-containerd-0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7.scope: Deactivated successfully. Jun 20 18:25:54.362100 containerd[2011]: time="2025-06-20T18:25:54.362034470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\" id:\"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\" pid:3871 exit_status:137 exited_at:{seconds:1750443954 nanos:361666522}" Jun 20 18:25:54.383551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189-rootfs.mount: Deactivated successfully. 
Jun 20 18:25:54.403495 containerd[2011]: time="2025-06-20T18:25:54.403341533Z" level=info msg="StopContainer for \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" returns successfully" Jun 20 18:25:54.404785 containerd[2011]: time="2025-06-20T18:25:54.404262825Z" level=info msg="StopPodSandbox for \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\"" Jun 20 18:25:54.404785 containerd[2011]: time="2025-06-20T18:25:54.404358669Z" level=info msg="Container to stop \"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:25:54.404785 containerd[2011]: time="2025-06-20T18:25:54.404383414Z" level=info msg="Container to stop \"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:25:54.404785 containerd[2011]: time="2025-06-20T18:25:54.404405025Z" level=info msg="Container to stop \"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:25:54.404785 containerd[2011]: time="2025-06-20T18:25:54.404425303Z" level=info msg="Container to stop \"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:25:54.404785 containerd[2011]: time="2025-06-20T18:25:54.404445281Z" level=info msg="Container to stop \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:25:54.420711 systemd[1]: cri-containerd-a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334.scope: Deactivated successfully. Jun 20 18:25:54.448460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7-rootfs.mount: Deactivated successfully. Jun 20 18:25:54.456254 containerd[2011]: time="2025-06-20T18:25:54.456182845Z" level=info msg="shim disconnected" id=0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7 namespace=k8s.io Jun 20 18:25:54.456542 containerd[2011]: time="2025-06-20T18:25:54.456266227Z" level=warning msg="cleaning up after shim disconnected" id=0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7 namespace=k8s.io Jun 20 18:25:54.456542 containerd[2011]: time="2025-06-20T18:25:54.456350629Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:25:54.481879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334-rootfs.mount: Deactivated successfully. 
Jun 20 18:25:54.491417 containerd[2011]: time="2025-06-20T18:25:54.491189220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" id:\"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" pid:3917 exit_status:137 exited_at:{seconds:1750443954 nanos:425188683}" Jun 20 18:25:54.494075 containerd[2011]: time="2025-06-20T18:25:54.494020607Z" level=info msg="received exit event sandbox_id:\"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\" exit_status:137 exited_at:{seconds:1750443954 nanos:361666522}" Jun 20 18:25:54.495218 containerd[2011]: time="2025-06-20T18:25:54.495017945Z" level=info msg="TearDown network for sandbox \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\" successfully" Jun 20 18:25:54.495218 containerd[2011]: time="2025-06-20T18:25:54.495065381Z" level=info msg="StopPodSandbox for \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\" returns successfully" Jun 20 18:25:54.496994 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7-shm.mount: Deactivated successfully. Jun 20 18:25:54.499747 containerd[2011]: time="2025-06-20T18:25:54.499653738Z" level=info msg="received exit event sandbox_id:\"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" exit_status:137 exited_at:{seconds:1750443954 nanos:425188683}" Jun 20 18:25:54.502282 containerd[2011]: time="2025-06-20T18:25:54.501971472Z" level=info msg="TearDown network for sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" successfully" Jun 20 18:25:54.504161 containerd[2011]: time="2025-06-20T18:25:54.504114147Z" level=info msg="StopPodSandbox for \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" returns successfully" Jun 20 18:25:54.506141 containerd[2011]: time="2025-06-20T18:25:54.505504562Z" level=info msg="shim disconnected" id=a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334 namespace=k8s.io Jun 20 18:25:54.506340 containerd[2011]: time="2025-06-20T18:25:54.506109664Z" level=warning msg="cleaning up after shim disconnected" id=a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334 namespace=k8s.io Jun 20 18:25:54.506340 containerd[2011]: time="2025-06-20T18:25:54.506165744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:25:54.719968 kubelet[3290]: I0620 18:25:54.719213 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-run\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.719968 kubelet[3290]: I0620 18:25:54.719288 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-config-path\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.719968 kubelet[3290]: I0620 18:25:54.719325 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-bpf-maps\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.719968 kubelet[3290]: I0620 18:25:54.719350 3290 
operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:25:54.719968 kubelet[3290]: I0620 18:25:54.719377 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-host-proc-sys-kernel\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.719968 kubelet[3290]: I0620 18:25:54.719415 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-cgroup\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.720761 kubelet[3290]: I0620 18:25:54.719459 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snlct\" (UniqueName: \"kubernetes.io/projected/56902c27-56d3-4770-9350-0d79ea6c84ed-kube-api-access-snlct\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.720761 kubelet[3290]: I0620 18:25:54.719497 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzdsc\" (UniqueName: \"kubernetes.io/projected/1d30d606-8672-41b7-a609-b5f759fdb43c-kube-api-access-wzdsc\") pod \"1d30d606-8672-41b7-a609-b5f759fdb43c\" (UID: \"1d30d606-8672-41b7-a609-b5f759fdb43c\") " Jun 20 18:25:54.720761 kubelet[3290]: I0620 18:25:54.719548 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-lib-modules\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.720761 kubelet[3290]: I0620 18:25:54.719588 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56902c27-56d3-4770-9350-0d79ea6c84ed-hubble-tls\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.720761 kubelet[3290]: I0620 18:25:54.719624 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d30d606-8672-41b7-a609-b5f759fdb43c-cilium-config-path\") pod \"1d30d606-8672-41b7-a609-b5f759fdb43c\" (UID: \"1d30d606-8672-41b7-a609-b5f759fdb43c\") " Jun 20 18:25:54.720761 kubelet[3290]: I0620 18:25:54.719664 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-etc-cni-netd\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.721108 kubelet[3290]: I0620 18:25:54.719697 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-hostproc\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 
18:25:54.721108 kubelet[3290]: I0620 18:25:54.719735 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-xtables-lock\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.721108 kubelet[3290]: I0620 18:25:54.719776 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56902c27-56d3-4770-9350-0d79ea6c84ed-clustermesh-secrets\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.721108 kubelet[3290]: I0620 18:25:54.719809 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-host-proc-sys-net\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.722298 kubelet[3290]: I0620 18:25:54.721401 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-cni-path" (OuterVolumeSpecName: "cni-path") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:25:54.722298 kubelet[3290]: I0620 18:25:54.721466 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:25:54.722298 kubelet[3290]: I0620 18:25:54.721501 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:25:54.722298 kubelet[3290]: I0620 18:25:54.721536 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:25:54.722298 kubelet[3290]: I0620 18:25:54.721766 3290 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-cni-path\") pod \"56902c27-56d3-4770-9350-0d79ea6c84ed\" (UID: \"56902c27-56d3-4770-9350-0d79ea6c84ed\") " Jun 20 18:25:54.723092 kubelet[3290]: I0620 18:25:54.723021 3290 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-run\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.723483 kubelet[3290]: I0620 18:25:54.723426 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:25:54.723781 kubelet[3290]: I0620 18:25:54.723495 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:25:54.723781 kubelet[3290]: I0620 18:25:54.723606 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-hostproc" (OuterVolumeSpecName: "hostproc") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:25:54.724297 kubelet[3290]: I0620 18:25:54.724025 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:25:54.724852 kubelet[3290]: I0620 18:25:54.724715 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:25:54.733740 kubelet[3290]: I0620 18:25:54.733663 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56902c27-56d3-4770-9350-0d79ea6c84ed-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:25:54.736142 kubelet[3290]: I0620 18:25:54.736055 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 18:25:54.738098 kubelet[3290]: I0620 18:25:54.738008 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56902c27-56d3-4770-9350-0d79ea6c84ed-kube-api-access-snlct" (OuterVolumeSpecName: "kube-api-access-snlct") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "kube-api-access-snlct". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:25:54.738098 kubelet[3290]: I0620 18:25:54.738017 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d30d606-8672-41b7-a609-b5f759fdb43c-kube-api-access-wzdsc" (OuterVolumeSpecName: "kube-api-access-wzdsc") pod "1d30d606-8672-41b7-a609-b5f759fdb43c" (UID: "1d30d606-8672-41b7-a609-b5f759fdb43c"). InnerVolumeSpecName "kube-api-access-wzdsc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:25:54.738433 kubelet[3290]: I0620 18:25:54.738402 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56902c27-56d3-4770-9350-0d79ea6c84ed-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "56902c27-56d3-4770-9350-0d79ea6c84ed" (UID: "56902c27-56d3-4770-9350-0d79ea6c84ed"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 18:25:54.741268 kubelet[3290]: I0620 18:25:54.741205 3290 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d30d606-8672-41b7-a609-b5f759fdb43c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1d30d606-8672-41b7-a609-b5f759fdb43c" (UID: "1d30d606-8672-41b7-a609-b5f759fdb43c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 18:25:54.783333 systemd[1]: Removed slice kubepods-burstable-pod56902c27_56d3_4770_9350_0d79ea6c84ed.slice - libcontainer container kubepods-burstable-pod56902c27_56d3_4770_9350_0d79ea6c84ed.slice. Jun 20 18:25:54.783578 systemd[1]: kubepods-burstable-pod56902c27_56d3_4770_9350_0d79ea6c84ed.slice: Consumed 14.696s CPU time, 125.7M memory peak, 136K read from disk, 12.9M written to disk. Jun 20 18:25:54.788139 systemd[1]: Removed slice kubepods-besteffort-pod1d30d606_8672_41b7_a609_b5f759fdb43c.slice - libcontainer container kubepods-besteffort-pod1d30d606_8672_41b7_a609_b5f759fdb43c.slice. 
Jun 20 18:25:54.824191 kubelet[3290]: I0620 18:25:54.824126 3290 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-snlct\" (UniqueName: \"kubernetes.io/projected/56902c27-56d3-4770-9350-0d79ea6c84ed-kube-api-access-snlct\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.824405 kubelet[3290]: I0620 18:25:54.824329 3290 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wzdsc\" (UniqueName: \"kubernetes.io/projected/1d30d606-8672-41b7-a609-b5f759fdb43c-kube-api-access-wzdsc\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.824405 kubelet[3290]: I0620 18:25:54.824362 3290 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-lib-modules\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.824716 kubelet[3290]: I0620 18:25:54.824384 3290 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-etc-cni-netd\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.824716 kubelet[3290]: I0620 18:25:54.824572 3290 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56902c27-56d3-4770-9350-0d79ea6c84ed-hubble-tls\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.824716 kubelet[3290]: I0620 18:25:54.824593 3290 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d30d606-8672-41b7-a609-b5f759fdb43c-cilium-config-path\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.824716 kubelet[3290]: I0620 18:25:54.824643 3290 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56902c27-56d3-4770-9350-0d79ea6c84ed-clustermesh-secrets\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.824716 kubelet[3290]: I0620 18:25:54.824670 3290 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-host-proc-sys-net\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.824716 kubelet[3290]: I0620 18:25:54.824690 3290 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-cni-path\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.825308 kubelet[3290]: I0620 18:25:54.825110 3290 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-hostproc\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.825308 kubelet[3290]: I0620 18:25:54.825141 3290 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-xtables-lock\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.825308 kubelet[3290]: I0620 18:25:54.825188 3290 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-config-path\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.825308 kubelet[3290]: I0620 18:25:54.825214 3290 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-host-proc-sys-kernel\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.825308 kubelet[3290]: I0620 18:25:54.825236 3290 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-bpf-maps\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:54.825308 kubelet[3290]: I0620 18:25:54.825279 3290 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56902c27-56d3-4770-9350-0d79ea6c84ed-cilium-cgroup\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jun 20 18:25:55.303965 kubelet[3290]: I0620 18:25:55.301033 3290 scope.go:117] "RemoveContainer" containerID="b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75" Jun 20 18:25:55.313601 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334-shm.mount: Deactivated successfully. Jun 20 18:25:55.314367 containerd[2011]: time="2025-06-20T18:25:55.313697654Z" level=info msg="RemoveContainer for \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\"" Jun 20 18:25:55.314932 systemd[1]: var-lib-kubelet-pods-56902c27\x2d56d3\x2d4770\x2d9350\x2d0d79ea6c84ed-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 18:25:55.315114 systemd[1]: var-lib-kubelet-pods-56902c27\x2d56d3\x2d4770\x2d9350\x2d0d79ea6c84ed-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 18:25:55.315260 systemd[1]: var-lib-kubelet-pods-1d30d606\x2d8672\x2d41b7\x2da609\x2db5f759fdb43c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwzdsc.mount: Deactivated successfully. Jun 20 18:25:55.315398 systemd[1]: var-lib-kubelet-pods-56902c27\x2d56d3\x2d4770\x2d9350\x2d0d79ea6c84ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsnlct.mount: Deactivated successfully. 
Jun 20 18:25:55.336530 containerd[2011]: time="2025-06-20T18:25:55.336470779Z" level=info msg="RemoveContainer for \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\" returns successfully" Jun 20 18:25:55.337531 kubelet[3290]: I0620 18:25:55.337359 3290 scope.go:117] "RemoveContainer" containerID="b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75" Jun 20 18:25:55.340361 containerd[2011]: time="2025-06-20T18:25:55.340073251Z" level=error msg="ContainerStatus for \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\": not found" Jun 20 18:25:55.340753 kubelet[3290]: E0620 18:25:55.340295 3290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\": not found" containerID="b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75" Jun 20 18:25:55.340753 kubelet[3290]: I0620 18:25:55.340345 3290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75"} err="failed to get container status \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4d0f6a82a655f2ebf18398dd026870fb0ef7dc12d37c9e5a29d2366ef5fcc75\": not found" Jun 20 18:25:55.340753 kubelet[3290]: I0620 18:25:55.340461 3290 scope.go:117] "RemoveContainer" containerID="6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189" Jun 20 18:25:55.351958 containerd[2011]: time="2025-06-20T18:25:55.351721329Z" level=info msg="RemoveContainer for \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\"" Jun 20 18:25:55.364010 containerd[2011]: time="2025-06-20T18:25:55.363887153Z" level=info msg="RemoveContainer for \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" returns successfully" Jun 20 18:25:55.364366 kubelet[3290]: I0620 18:25:55.364320 3290 scope.go:117] "RemoveContainer" containerID="a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb" Jun 20 18:25:55.369927 containerd[2011]: time="2025-06-20T18:25:55.369869010Z" level=info msg="RemoveContainer for \"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\"" Jun 20 18:25:55.383229 containerd[2011]: time="2025-06-20T18:25:55.383168080Z" level=info msg="RemoveContainer for \"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\" returns successfully" Jun 20 18:25:55.384043 kubelet[3290]: I0620 18:25:55.384014 3290 scope.go:117] "RemoveContainer" containerID="3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431" Jun 20 18:25:55.390788 containerd[2011]: time="2025-06-20T18:25:55.390721763Z" level=info msg="RemoveContainer for \"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\"" Jun 20 18:25:55.399779 containerd[2011]: time="2025-06-20T18:25:55.399664024Z" level=info msg="RemoveContainer for \"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\" returns successfully" Jun 20 18:25:55.400223 kubelet[3290]: I0620 18:25:55.400068 3290 scope.go:117] "RemoveContainer" containerID="8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151" Jun 20 18:25:55.403242 containerd[2011]: 
time="2025-06-20T18:25:55.403176380Z" level=info msg="RemoveContainer for \"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\"" Jun 20 18:25:55.410007 containerd[2011]: time="2025-06-20T18:25:55.409939840Z" level=info msg="RemoveContainer for \"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\" returns successfully" Jun 20 18:25:55.410442 kubelet[3290]: I0620 18:25:55.410376 3290 scope.go:117] "RemoveContainer" containerID="9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096" Jun 20 18:25:55.413588 containerd[2011]: time="2025-06-20T18:25:55.413537918Z" level=info msg="RemoveContainer for \"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\"" Jun 20 18:25:55.423559 containerd[2011]: time="2025-06-20T18:25:55.423481467Z" level=info msg="RemoveContainer for \"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\" returns successfully" Jun 20 18:25:55.423988 kubelet[3290]: I0620 18:25:55.423796 3290 scope.go:117] "RemoveContainer" containerID="6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189" Jun 20 18:25:55.424380 containerd[2011]: time="2025-06-20T18:25:55.424310265Z" level=error msg="ContainerStatus for \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\": not found" Jun 20 18:25:55.424650 kubelet[3290]: E0620 18:25:55.424600 3290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\": not found" containerID="6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189" Jun 20 18:25:55.424720 kubelet[3290]: I0620 18:25:55.424661 3290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189"} err="failed to get container status \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b97e59a95eec913cd08fd20c8669a08180bf310d9b3b788f060ad3e754b2189\": not found" Jun 20 18:25:55.424720 kubelet[3290]: I0620 18:25:55.424700 3290 scope.go:117] "RemoveContainer" containerID="a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb" Jun 20 18:25:55.425255 containerd[2011]: time="2025-06-20T18:25:55.425084436Z" level=error msg="ContainerStatus for \"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\": not found" Jun 20 18:25:55.425656 kubelet[3290]: E0620 18:25:55.425454 3290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\": not found" containerID="a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb" Jun 20 18:25:55.425656 kubelet[3290]: I0620 18:25:55.425502 3290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb"} err="failed to get container status 
\"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"a652b204fb0942a5af9ae5a0551ff8d0a704d3af0831c4d6ce33c47a61f7d8bb\": not found" Jun 20 18:25:55.425656 kubelet[3290]: I0620 18:25:55.425536 3290 scope.go:117] "RemoveContainer" containerID="3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431" Jun 20 18:25:55.426170 containerd[2011]: time="2025-06-20T18:25:55.426103169Z" level=error msg="ContainerStatus for \"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\": not found" Jun 20 18:25:55.426470 kubelet[3290]: E0620 18:25:55.426414 3290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\": not found" containerID="3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431" Jun 20 18:25:55.426545 kubelet[3290]: I0620 18:25:55.426468 3290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431"} err="failed to get container status \"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\": rpc error: code = NotFound desc = an error occurred when try to find container \"3acafddf5c66c772fdf00ebb1c5b38df96fe6fd0b6b44457106eca51646a6431\": not found" Jun 20 18:25:55.426545 kubelet[3290]: I0620 18:25:55.426524 3290 scope.go:117] "RemoveContainer" containerID="8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151" Jun 20 18:25:55.427152 containerd[2011]: time="2025-06-20T18:25:55.427095441Z" level=error msg="ContainerStatus for \"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\": not found" Jun 20 18:25:55.427449 kubelet[3290]: E0620 18:25:55.427407 3290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\": not found" containerID="8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151" Jun 20 18:25:55.427548 kubelet[3290]: I0620 18:25:55.427479 3290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151"} err="failed to get container status \"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c8d860ea0ab0c98e5b8456c0c74984d145251e5bf752c42f07aa337a6980151\": not found" Jun 20 18:25:55.427548 kubelet[3290]: I0620 18:25:55.427514 3290 scope.go:117] "RemoveContainer" containerID="9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096" Jun 20 18:25:55.428140 containerd[2011]: time="2025-06-20T18:25:55.428070917Z" level=error msg="ContainerStatus for \"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\": not found" Jun 20 18:25:55.428468 kubelet[3290]: E0620 18:25:55.428290 3290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\": not found" containerID="9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096" Jun 20 18:25:55.428468 kubelet[3290]: I0620 18:25:55.428352 3290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096"} err="failed to get container status \"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c135d768e6357c9ac01867259982207a1f11439b7add07f343eae0b16d68096\": not found" Jun 20 18:25:56.113273 sshd[5119]: Connection closed by 139.178.68.195 port 59118 Jun 20 18:25:56.114138 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:56.122113 systemd[1]: sshd@24-172.31.31.140:22-139.178.68.195:59118.service: Deactivated successfully. Jun 20 18:25:56.126434 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 18:25:56.127173 systemd[1]: session-25.scope: Consumed 2.243s CPU time, 25M memory peak. Jun 20 18:25:56.129268 systemd-logind[1981]: Session 25 logged out. Waiting for processes to exit. Jun 20 18:25:56.148211 systemd[1]: Started sshd@25-172.31.31.140:22-139.178.68.195:58080.service - OpenSSH per-connection server daemon (139.178.68.195:58080). Jun 20 18:25:56.150864 systemd-logind[1981]: Removed session 25. Jun 20 18:25:56.337888 sshd[5272]: Accepted publickey for core from 139.178.68.195 port 58080 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:56.341856 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:56.350212 systemd-logind[1981]: New session 26 of user core. Jun 20 18:25:56.359108 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 20 18:25:56.772950 kubelet[3290]: I0620 18:25:56.772801 3290 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d30d606-8672-41b7-a609-b5f759fdb43c" path="/var/lib/kubelet/pods/1d30d606-8672-41b7-a609-b5f759fdb43c/volumes" Jun 20 18:25:56.775003 kubelet[3290]: I0620 18:25:56.774924 3290 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56902c27-56d3-4770-9350-0d79ea6c84ed" path="/var/lib/kubelet/pods/56902c27-56d3-4770-9350-0d79ea6c84ed/volumes" Jun 20 18:25:56.786080 ntpd[1970]: Deleting interface #12 lxc_health, fe80::c3a:dbff:fe58:83e4%8#123, interface stats: received=0, sent=0, dropped=0, active_time=75 secs Jun 20 18:25:56.786598 ntpd[1970]: 20 Jun 18:25:56 ntpd[1970]: Deleting interface #12 lxc_health, fe80::c3a:dbff:fe58:83e4%8#123, interface stats: received=0, sent=0, dropped=0, active_time=75 secs Jun 20 18:25:57.605370 sshd[5274]: Connection closed by 139.178.68.195 port 58080 Jun 20 18:25:57.606467 sshd-session[5272]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:57.616624 systemd[1]: sshd@25-172.31.31.140:22-139.178.68.195:58080.service: Deactivated successfully. Jun 20 18:25:57.629350 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 18:25:57.630766 systemd[1]: session-26.scope: Consumed 1.029s CPU time, 23.7M memory peak. 
Jun 20 18:25:57.634374 systemd-logind[1981]: Session 26 logged out. Waiting for processes to exit. Jun 20 18:25:57.642287 kubelet[3290]: I0620 18:25:57.641905 3290 memory_manager.go:355] "RemoveStaleState removing state" podUID="1d30d606-8672-41b7-a609-b5f759fdb43c" containerName="cilium-operator" Jun 20 18:25:57.642661 kubelet[3290]: I0620 18:25:57.642460 3290 memory_manager.go:355] "RemoveStaleState removing state" podUID="56902c27-56d3-4770-9350-0d79ea6c84ed" containerName="cilium-agent" Jun 20 18:25:57.667785 systemd-logind[1981]: Removed session 26. Jun 20 18:25:57.674398 systemd[1]: Started sshd@26-172.31.31.140:22-139.178.68.195:58082.service - OpenSSH per-connection server daemon (139.178.68.195:58082). Jun 20 18:25:57.696197 systemd[1]: Created slice kubepods-burstable-pod114e3c1c_784c_4cd6_af5b_82cf82f30031.slice - libcontainer container kubepods-burstable-pod114e3c1c_784c_4cd6_af5b_82cf82f30031.slice. Jun 20 18:25:57.842118 kubelet[3290]: I0620 18:25:57.842075 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/114e3c1c-784c-4cd6-af5b-82cf82f30031-cilium-run\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.843251 kubelet[3290]: I0620 18:25:57.842926 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/114e3c1c-784c-4cd6-af5b-82cf82f30031-bpf-maps\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.843251 kubelet[3290]: I0620 18:25:57.843031 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/114e3c1c-784c-4cd6-af5b-82cf82f30031-lib-modules\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.843251 kubelet[3290]: I0620 18:25:57.843105 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/114e3c1c-784c-4cd6-af5b-82cf82f30031-xtables-lock\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.843251 kubelet[3290]: I0620 18:25:57.843205 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/114e3c1c-784c-4cd6-af5b-82cf82f30031-cilium-config-path\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.843696 kubelet[3290]: I0620 18:25:57.843543 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/114e3c1c-784c-4cd6-af5b-82cf82f30031-cilium-ipsec-secrets\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.843965 kubelet[3290]: I0620 18:25:57.843646 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/114e3c1c-784c-4cd6-af5b-82cf82f30031-host-proc-sys-kernel\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 
20 18:25:57.843965 kubelet[3290]: I0620 18:25:57.843888 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/114e3c1c-784c-4cd6-af5b-82cf82f30031-hubble-tls\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.844216 kubelet[3290]: I0620 18:25:57.844159 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/114e3c1c-784c-4cd6-af5b-82cf82f30031-host-proc-sys-net\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.844387 kubelet[3290]: I0620 18:25:57.844343 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/114e3c1c-784c-4cd6-af5b-82cf82f30031-cni-path\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.844632 kubelet[3290]: I0620 18:25:57.844523 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqp62\" (UniqueName: \"kubernetes.io/projected/114e3c1c-784c-4cd6-af5b-82cf82f30031-kube-api-access-mqp62\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.844751 kubelet[3290]: I0620 18:25:57.844729 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/114e3c1c-784c-4cd6-af5b-82cf82f30031-cilium-cgroup\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.844940 kubelet[3290]: I0620 18:25:57.844816 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/114e3c1c-784c-4cd6-af5b-82cf82f30031-hostproc\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.845125 kubelet[3290]: I0620 18:25:57.845049 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/114e3c1c-784c-4cd6-af5b-82cf82f30031-clustermesh-secrets\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.845494 kubelet[3290]: I0620 18:25:57.845352 3290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/114e3c1c-784c-4cd6-af5b-82cf82f30031-etc-cni-netd\") pod \"cilium-gsrqh\" (UID: \"114e3c1c-784c-4cd6-af5b-82cf82f30031\") " pod="kube-system/cilium-gsrqh" Jun 20 18:25:57.911606 sshd[5286]: Accepted publickey for core from 139.178.68.195 port 58082 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:57.913526 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:57.922937 systemd-logind[1981]: New session 27 of user core. Jun 20 18:25:57.929136 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 20 18:25:58.015092 containerd[2011]: time="2025-06-20T18:25:58.014986958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gsrqh,Uid:114e3c1c-784c-4cd6-af5b-82cf82f30031,Namespace:kube-system,Attempt:0,}" Jun 20 18:25:58.050875 sshd[5288]: Connection closed by 139.178.68.195 port 58082 Jun 20 18:25:58.051730 sshd-session[5286]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:58.057189 containerd[2011]: time="2025-06-20T18:25:58.057107464Z" level=info msg="connecting to shim 50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b" address="unix:///run/containerd/s/185787d67945897bce560a2aea0e4d231655df227a51d84e58bb793c5579edae" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:25:58.063359 systemd[1]: sshd@26-172.31.31.140:22-139.178.68.195:58082.service: Deactivated successfully. Jun 20 18:25:58.070323 systemd[1]: session-27.scope: Deactivated successfully. Jun 20 18:25:58.078964 systemd-logind[1981]: Session 27 logged out. Waiting for processes to exit. Jun 20 18:25:58.114377 systemd-logind[1981]: Removed session 27. Jun 20 18:25:58.126178 systemd[1]: Started cri-containerd-50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b.scope - libcontainer container 50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b. Jun 20 18:25:58.129329 systemd[1]: Started sshd@27-172.31.31.140:22-139.178.68.195:58088.service - OpenSSH per-connection server daemon (139.178.68.195:58088). Jun 20 18:25:58.184517 containerd[2011]: time="2025-06-20T18:25:58.184229371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gsrqh,Uid:114e3c1c-784c-4cd6-af5b-82cf82f30031,Namespace:kube-system,Attempt:0,} returns sandbox id \"50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b\"" Jun 20 18:25:58.195117 containerd[2011]: time="2025-06-20T18:25:58.194304554Z" level=info msg="CreateContainer within sandbox \"50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:25:58.211336 containerd[2011]: time="2025-06-20T18:25:58.211279129Z" level=info msg="Container c3f872a58d4601473e4da975cb070ce1b48792b060115310367aeb9b4225f238: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:58.225149 containerd[2011]: time="2025-06-20T18:25:58.225092212Z" level=info msg="CreateContainer within sandbox \"50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c3f872a58d4601473e4da975cb070ce1b48792b060115310367aeb9b4225f238\"" Jun 20 18:25:58.227043 containerd[2011]: time="2025-06-20T18:25:58.226979303Z" level=info msg="StartContainer for \"c3f872a58d4601473e4da975cb070ce1b48792b060115310367aeb9b4225f238\"" Jun 20 18:25:58.229469 containerd[2011]: time="2025-06-20T18:25:58.229404143Z" level=info msg="connecting to shim c3f872a58d4601473e4da975cb070ce1b48792b060115310367aeb9b4225f238" address="unix:///run/containerd/s/185787d67945897bce560a2aea0e4d231655df227a51d84e58bb793c5579edae" protocol=ttrpc version=3 Jun 20 18:25:58.263355 systemd[1]: Started cri-containerd-c3f872a58d4601473e4da975cb070ce1b48792b060115310367aeb9b4225f238.scope - libcontainer container c3f872a58d4601473e4da975cb070ce1b48792b060115310367aeb9b4225f238. 
Jun 20 18:25:58.324531 containerd[2011]: time="2025-06-20T18:25:58.324129094Z" level=info msg="StartContainer for \"c3f872a58d4601473e4da975cb070ce1b48792b060115310367aeb9b4225f238\" returns successfully" Jun 20 18:25:58.336665 systemd[1]: cri-containerd-c3f872a58d4601473e4da975cb070ce1b48792b060115310367aeb9b4225f238.scope: Deactivated successfully. Jun 20 18:25:58.342502 containerd[2011]: time="2025-06-20T18:25:58.342447117Z" level=info msg="received exit event container_id:\"c3f872a58d4601473e4da975cb070ce1b48792b060115310367aeb9b4225f238\" id:\"c3f872a58d4601473e4da975cb070ce1b48792b060115310367aeb9b4225f238\" pid:5357 exited_at:{seconds:1750443958 nanos:341733336}" Jun 20 18:25:58.343337 containerd[2011]: time="2025-06-20T18:25:58.342955367Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3f872a58d4601473e4da975cb070ce1b48792b060115310367aeb9b4225f238\" id:\"c3f872a58d4601473e4da975cb070ce1b48792b060115310367aeb9b4225f238\" pid:5357 exited_at:{seconds:1750443958 nanos:341733336}" Jun 20 18:25:58.356703 sshd[5330]: Accepted publickey for core from 139.178.68.195 port 58088 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:58.363695 sshd-session[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:58.382896 systemd-logind[1981]: New session 28 of user core. Jun 20 18:25:58.387142 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 20 18:25:58.984884 kubelet[3290]: E0620 18:25:58.984796 3290 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:25:59.366978 containerd[2011]: time="2025-06-20T18:25:59.366255160Z" level=info msg="CreateContainer within sandbox \"50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:25:59.392152 containerd[2011]: time="2025-06-20T18:25:59.392044012Z" level=info msg="Container 5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:59.414513 containerd[2011]: time="2025-06-20T18:25:59.414425550Z" level=info msg="CreateContainer within sandbox \"50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed\"" Jun 20 18:25:59.416631 containerd[2011]: time="2025-06-20T18:25:59.416502684Z" level=info msg="StartContainer for \"5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed\"" Jun 20 18:25:59.420424 containerd[2011]: time="2025-06-20T18:25:59.420355805Z" level=info msg="connecting to shim 5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed" address="unix:///run/containerd/s/185787d67945897bce560a2aea0e4d231655df227a51d84e58bb793c5579edae" protocol=ttrpc version=3 Jun 20 18:25:59.461157 systemd[1]: Started cri-containerd-5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed.scope - libcontainer container 5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed. 
Jun 20 18:25:59.534856 containerd[2011]: time="2025-06-20T18:25:59.534685198Z" level=info msg="StartContainer for \"5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed\" returns successfully" Jun 20 18:25:59.553182 systemd[1]: cri-containerd-5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed.scope: Deactivated successfully. Jun 20 18:25:59.556513 containerd[2011]: time="2025-06-20T18:25:59.556379236Z" level=info msg="received exit event container_id:\"5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed\" id:\"5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed\" pid:5407 exited_at:{seconds:1750443959 nanos:556082508}" Jun 20 18:25:59.557682 containerd[2011]: time="2025-06-20T18:25:59.556723196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed\" id:\"5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed\" pid:5407 exited_at:{seconds:1750443959 nanos:556082508}" Jun 20 18:25:59.618151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a8f4ee4b40956e958ea889630e83e27853dbace8e7a63eba1ef5f9235cba5ed-rootfs.mount: Deactivated successfully. Jun 20 18:26:00.373236 containerd[2011]: time="2025-06-20T18:26:00.372339898Z" level=info msg="CreateContainer within sandbox \"50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:26:00.401221 containerd[2011]: time="2025-06-20T18:26:00.401139147Z" level=info msg="Container 9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:26:00.424917 containerd[2011]: time="2025-06-20T18:26:00.424864876Z" level=info msg="CreateContainer within sandbox \"50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867\"" Jun 20 18:26:00.426330 containerd[2011]: time="2025-06-20T18:26:00.426252914Z" level=info msg="StartContainer for \"9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867\"" Jun 20 18:26:00.429267 containerd[2011]: time="2025-06-20T18:26:00.429208671Z" level=info msg="connecting to shim 9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867" address="unix:///run/containerd/s/185787d67945897bce560a2aea0e4d231655df227a51d84e58bb793c5579edae" protocol=ttrpc version=3 Jun 20 18:26:00.469216 systemd[1]: Started cri-containerd-9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867.scope - libcontainer container 9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867. Jun 20 18:26:00.550686 systemd[1]: cri-containerd-9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867.scope: Deactivated successfully. 
Jun 20 18:26:00.558655 containerd[2011]: time="2025-06-20T18:26:00.558586805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867\" id:\"9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867\" pid:5453 exited_at:{seconds:1750443960 nanos:558180594}" Jun 20 18:26:00.559230 containerd[2011]: time="2025-06-20T18:26:00.559072183Z" level=info msg="received exit event container_id:\"9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867\" id:\"9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867\" pid:5453 exited_at:{seconds:1750443960 nanos:558180594}" Jun 20 18:26:00.560177 containerd[2011]: time="2025-06-20T18:26:00.560095971Z" level=info msg="StartContainer for \"9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867\" returns successfully" Jun 20 18:26:00.602900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9da51eac78bb08288e554c9ff555862f8aae4b27b015fb3ece1cfd2e85ea5867-rootfs.mount: Deactivated successfully. Jun 20 18:26:01.332705 kubelet[3290]: I0620 18:26:01.332626 3290 setters.go:602] "Node became not ready" node="ip-172-31-31-140" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T18:26:01Z","lastTransitionTime":"2025-06-20T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 20 18:26:01.382298 containerd[2011]: time="2025-06-20T18:26:01.382170664Z" level=info msg="CreateContainer within sandbox \"50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:26:01.409806 containerd[2011]: time="2025-06-20T18:26:01.408761709Z" level=info msg="Container 61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:26:01.427981 containerd[2011]: time="2025-06-20T18:26:01.427923394Z" level=info msg="CreateContainer within sandbox \"50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f\"" Jun 20 18:26:01.430077 containerd[2011]: time="2025-06-20T18:26:01.430015643Z" level=info msg="StartContainer for \"61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f\"" Jun 20 18:26:01.433084 containerd[2011]: time="2025-06-20T18:26:01.433007983Z" level=info msg="connecting to shim 61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f" address="unix:///run/containerd/s/185787d67945897bce560a2aea0e4d231655df227a51d84e58bb793c5579edae" protocol=ttrpc version=3 Jun 20 18:26:01.481160 systemd[1]: Started cri-containerd-61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f.scope - libcontainer container 61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f. Jun 20 18:26:01.531803 systemd[1]: cri-containerd-61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f.scope: Deactivated successfully. 
Jun 20 18:26:01.538074 containerd[2011]: time="2025-06-20T18:26:01.538012243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f\" id:\"61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f\" pid:5494 exited_at:{seconds:1750443961 nanos:537272842}" Jun 20 18:26:01.542488 containerd[2011]: time="2025-06-20T18:26:01.542313777Z" level=info msg="received exit event container_id:\"61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f\" id:\"61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f\" pid:5494 exited_at:{seconds:1750443961 nanos:537272842}" Jun 20 18:26:01.559938 containerd[2011]: time="2025-06-20T18:26:01.559884474Z" level=info msg="StartContainer for \"61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f\" returns successfully" Jun 20 18:26:01.585639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61734b2574470383f332be5f9ff69b4ef4dd3fe3f373938783c8acb25a16078f-rootfs.mount: Deactivated successfully. Jun 20 18:26:02.407608 containerd[2011]: time="2025-06-20T18:26:02.407019952Z" level=info msg="CreateContainer within sandbox \"50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:26:02.429138 containerd[2011]: time="2025-06-20T18:26:02.429082202Z" level=info msg="Container 9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:26:02.441094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875130602.mount: Deactivated successfully. Jun 20 18:26:02.456465 containerd[2011]: time="2025-06-20T18:26:02.456385011Z" level=info msg="CreateContainer within sandbox \"50306a1c8fc67fbb15b3f365ad0a93a50d831b9e6513d1c83bdfc0110f2c568b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f\"" Jun 20 18:26:02.457289 containerd[2011]: time="2025-06-20T18:26:02.457141209Z" level=info msg="StartContainer for \"9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f\"" Jun 20 18:26:02.459593 containerd[2011]: time="2025-06-20T18:26:02.459520570Z" level=info msg="connecting to shim 9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f" address="unix:///run/containerd/s/185787d67945897bce560a2aea0e4d231655df227a51d84e58bb793c5579edae" protocol=ttrpc version=3 Jun 20 18:26:02.504271 systemd[1]: Started cri-containerd-9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f.scope - libcontainer container 9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f. 
Jun 20 18:26:02.575226 containerd[2011]: time="2025-06-20T18:26:02.575060480Z" level=info msg="StartContainer for \"9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f\" returns successfully" Jun 20 18:26:02.714130 containerd[2011]: time="2025-06-20T18:26:02.713975363Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f\" id:\"47a0d9f36eee7dca7a8545c225023fb094533cbc5b9f3ce3a313dc8ed02fb1c6\" pid:5560 exited_at:{seconds:1750443962 nanos:711472148}" Jun 20 18:26:02.768081 kubelet[3290]: E0620 18:26:02.767164 3290 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-nm9rm" podUID="91fb5617-15be-487c-b1d3-8a55f66621ff" Jun 20 18:26:03.415329 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jun 20 18:26:03.463515 kubelet[3290]: I0620 18:26:03.463391 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gsrqh" podStartSLOduration=6.463361206 podStartE2EDuration="6.463361206s" podCreationTimestamp="2025-06-20 18:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:26:03.460523096 +0000 UTC m=+114.984324408" watchObservedRunningTime="2025-06-20 18:26:03.463361206 +0000 UTC m=+114.987162590" Jun 20 18:26:04.951424 containerd[2011]: time="2025-06-20T18:26:04.951352404Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f\" id:\"eb05c5f76e8cf15047b8fa7262e1164a97ae7ad5bd95d77e4fa6d286cfd48cc2\" pid:5642 exit_status:1 exited_at:{seconds:1750443964 nanos:950478067}" Jun 20 18:26:07.187547 containerd[2011]: time="2025-06-20T18:26:07.186726208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f\" id:\"054e80694fba89d7e13b3d330108f047cedae8b486f95d9b31924224c8396970\" pid:5991 exit_status:1 exited_at:{seconds:1750443967 nanos:185580391}" Jun 20 18:26:07.567226 (udev-worker)[6071]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:26:07.571320 systemd-networkd[1900]: lxc_health: Link UP Jun 20 18:26:07.579570 (udev-worker)[6072]: Network interface NamePolicy= disabled on kernel command line. 
Jun 20 18:26:07.585547 systemd-networkd[1900]: lxc_health: Gained carrier Jun 20 18:26:08.754280 containerd[2011]: time="2025-06-20T18:26:08.754185973Z" level=info msg="StopPodSandbox for \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\"" Jun 20 18:26:08.756045 containerd[2011]: time="2025-06-20T18:26:08.755059410Z" level=info msg="TearDown network for sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" successfully" Jun 20 18:26:08.756045 containerd[2011]: time="2025-06-20T18:26:08.755100062Z" level=info msg="StopPodSandbox for \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" returns successfully" Jun 20 18:26:08.756960 containerd[2011]: time="2025-06-20T18:26:08.756462179Z" level=info msg="RemovePodSandbox for \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\"" Jun 20 18:26:08.756960 containerd[2011]: time="2025-06-20T18:26:08.756517815Z" level=info msg="Forcibly stopping sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\"" Jun 20 18:26:08.756960 containerd[2011]: time="2025-06-20T18:26:08.756658981Z" level=info msg="TearDown network for sandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" successfully" Jun 20 18:26:08.760892 containerd[2011]: time="2025-06-20T18:26:08.760795036Z" level=info msg="Ensure that sandbox a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334 in task-service has been cleanup successfully" Jun 20 18:26:08.772968 containerd[2011]: time="2025-06-20T18:26:08.772791576Z" level=info msg="RemovePodSandbox \"a25dec4a7ae53d1b292825f3bcc63023a51148e27c0e0b705e1c5718b53d1334\" returns successfully" Jun 20 18:26:08.775872 containerd[2011]: time="2025-06-20T18:26:08.774086170Z" level=info msg="StopPodSandbox for \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\"" Jun 20 18:26:08.775872 containerd[2011]: time="2025-06-20T18:26:08.774288099Z" level=info msg="TearDown network for sandbox \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\" successfully" Jun 20 18:26:08.775872 containerd[2011]: time="2025-06-20T18:26:08.774314885Z" level=info msg="StopPodSandbox for \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\" returns successfully" Jun 20 18:26:08.778194 containerd[2011]: time="2025-06-20T18:26:08.778131700Z" level=info msg="RemovePodSandbox for \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\"" Jun 20 18:26:08.778374 containerd[2011]: time="2025-06-20T18:26:08.778194468Z" level=info msg="Forcibly stopping sandbox \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\"" Jun 20 18:26:08.778450 containerd[2011]: time="2025-06-20T18:26:08.778362023Z" level=info msg="TearDown network for sandbox \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\" successfully" Jun 20 18:26:08.786671 containerd[2011]: time="2025-06-20T18:26:08.786594430Z" level=info msg="Ensure that sandbox 0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7 in task-service has been cleanup successfully" Jun 20 18:26:08.795639 containerd[2011]: time="2025-06-20T18:26:08.795504118Z" level=info msg="RemovePodSandbox \"0e4cb99bbc08862e332e1ee024a0ba8f5fcc97b5bbf383b26e76aaae583ed2b7\" returns successfully" Jun 20 18:26:09.485898 containerd[2011]: time="2025-06-20T18:26:09.485634180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f\" 
id:\"543d2c33e20d4316562a54e1e47c6b26dca7c89c8b9f9add839fb77f334f5d5a\" pid:6106 exited_at:{seconds:1750443969 nanos:484435609}" Jun 20 18:26:09.514084 systemd-networkd[1900]: lxc_health: Gained IPv6LL Jun 20 18:26:11.786150 ntpd[1970]: Listen normally on 15 lxc_health [fe80::90eb:25ff:fe20:8fb7%14]:123 Jun 20 18:26:11.786700 ntpd[1970]: 20 Jun 18:26:11 ntpd[1970]: Listen normally on 15 lxc_health [fe80::90eb:25ff:fe20:8fb7%14]:123 Jun 20 18:26:11.852266 containerd[2011]: time="2025-06-20T18:26:11.852191303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f\" id:\"4e68f009add4f5cdd3348898a529aa871637d500cf54cf8208c45edf7163f45b\" pid:6138 exited_at:{seconds:1750443971 nanos:851659161}" Jun 20 18:26:14.104428 containerd[2011]: time="2025-06-20T18:26:14.104365703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d71c828f4e22e82a1f4bd495ba95a1e68083a907217168306b4be1db3896a8f\" id:\"39649a2810ce6fadc0b74b8dfcebdaaac4a67f4b335a6098939721170421da24\" pid:6166 exited_at:{seconds:1750443974 nanos:99653516}" Jun 20 18:26:14.139638 sshd[5389]: Connection closed by 139.178.68.195 port 58088 Jun 20 18:26:14.141629 sshd-session[5330]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:14.151498 systemd[1]: sshd@27-172.31.31.140:22-139.178.68.195:58088.service: Deactivated successfully. Jun 20 18:26:14.155479 systemd[1]: session-28.scope: Deactivated successfully. Jun 20 18:26:14.158739 systemd-logind[1981]: Session 28 logged out. Waiting for processes to exit. Jun 20 18:26:14.164692 systemd-logind[1981]: Removed session 28. Jun 20 18:26:27.976618 systemd[1]: cri-containerd-615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece.scope: Deactivated successfully. Jun 20 18:26:27.979041 systemd[1]: cri-containerd-615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece.scope: Consumed 5.827s CPU time, 53.5M memory peak. Jun 20 18:26:27.982741 containerd[2011]: time="2025-06-20T18:26:27.982566577Z" level=info msg="received exit event container_id:\"615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece\" id:\"615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece\" pid:3141 exit_status:1 exited_at:{seconds:1750443987 nanos:981965172}" Jun 20 18:26:27.985260 containerd[2011]: time="2025-06-20T18:26:27.984980839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece\" id:\"615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece\" pid:3141 exit_status:1 exited_at:{seconds:1750443987 nanos:981965172}" Jun 20 18:26:28.026777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece-rootfs.mount: Deactivated successfully. 
Jun 20 18:26:28.494822 kubelet[3290]: I0620 18:26:28.494440 3290 scope.go:117] "RemoveContainer" containerID="615451f6ec5a0ef0fd276094268ebb5d3cae08965e05065b6ec6844b14c41ece" Jun 20 18:26:28.497800 containerd[2011]: time="2025-06-20T18:26:28.497744280Z" level=info msg="CreateContainer within sandbox \"4a7004dcb05ca44ebe68886c1886e1f823fb646efb09f4eedd15bb4f5db01cd3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 20 18:26:28.512114 containerd[2011]: time="2025-06-20T18:26:28.512050558Z" level=info msg="Container b23b412ea07be5e3bb1e601e2e32bedc5fbddf73f2fe7dd184186a5580525b8d: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:26:28.534262 containerd[2011]: time="2025-06-20T18:26:28.534154673Z" level=info msg="CreateContainer within sandbox \"4a7004dcb05ca44ebe68886c1886e1f823fb646efb09f4eedd15bb4f5db01cd3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b23b412ea07be5e3bb1e601e2e32bedc5fbddf73f2fe7dd184186a5580525b8d\"" Jun 20 18:26:28.535542 containerd[2011]: time="2025-06-20T18:26:28.535421066Z" level=info msg="StartContainer for \"b23b412ea07be5e3bb1e601e2e32bedc5fbddf73f2fe7dd184186a5580525b8d\"" Jun 20 18:26:28.538818 containerd[2011]: time="2025-06-20T18:26:28.538745719Z" level=info msg="connecting to shim b23b412ea07be5e3bb1e601e2e32bedc5fbddf73f2fe7dd184186a5580525b8d" address="unix:///run/containerd/s/60a5779666c401207603b1da3832d1d5fb042decc3907f4452ed9bc538a4f631" protocol=ttrpc version=3 Jun 20 18:26:28.577170 systemd[1]: Started cri-containerd-b23b412ea07be5e3bb1e601e2e32bedc5fbddf73f2fe7dd184186a5580525b8d.scope - libcontainer container b23b412ea07be5e3bb1e601e2e32bedc5fbddf73f2fe7dd184186a5580525b8d. Jun 20 18:26:28.665524 containerd[2011]: time="2025-06-20T18:26:28.665458209Z" level=info msg="StartContainer for \"b23b412ea07be5e3bb1e601e2e32bedc5fbddf73f2fe7dd184186a5580525b8d\" returns successfully" Jun 20 18:26:31.867307 kubelet[3290]: E0620 18:26:31.867044 3290 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-140?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jun 20 18:26:32.567992 systemd[1]: cri-containerd-b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc.scope: Deactivated successfully. Jun 20 18:26:32.569509 systemd[1]: cri-containerd-b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc.scope: Consumed 3.817s CPU time, 20.3M memory peak. Jun 20 18:26:32.573197 containerd[2011]: time="2025-06-20T18:26:32.573125337Z" level=info msg="received exit event container_id:\"b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc\" id:\"b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc\" pid:3134 exit_status:1 exited_at:{seconds:1750443992 nanos:572597109}" Jun 20 18:26:32.574112 containerd[2011]: time="2025-06-20T18:26:32.573549929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc\" id:\"b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc\" pid:3134 exit_status:1 exited_at:{seconds:1750443992 nanos:572597109}" Jun 20 18:26:32.613363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc-rootfs.mount: Deactivated successfully. 
Jun 20 18:26:33.523975 kubelet[3290]: I0620 18:26:33.523746 3290 scope.go:117] "RemoveContainer" containerID="b3313d458fc7af3817bbb91a71323358ab76aed891e32e7584e3ee6a19e793bc" Jun 20 18:26:33.527604 containerd[2011]: time="2025-06-20T18:26:33.527541485Z" level=info msg="CreateContainer within sandbox \"2ba3dd11793e1d1f16b82be37612be847474c93c2633613850c7da88a8844b35\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 20 18:26:33.544525 containerd[2011]: time="2025-06-20T18:26:33.542798878Z" level=info msg="Container 5f04bb2de3c5274d182580b0fb56f8effc9413d3eefbcbec014c2eafba9550c3: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:26:33.561148 containerd[2011]: time="2025-06-20T18:26:33.561073847Z" level=info msg="CreateContainer within sandbox \"2ba3dd11793e1d1f16b82be37612be847474c93c2633613850c7da88a8844b35\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5f04bb2de3c5274d182580b0fb56f8effc9413d3eefbcbec014c2eafba9550c3\"" Jun 20 18:26:33.562429 containerd[2011]: time="2025-06-20T18:26:33.562361130Z" level=info msg="StartContainer for \"5f04bb2de3c5274d182580b0fb56f8effc9413d3eefbcbec014c2eafba9550c3\"" Jun 20 18:26:33.565381 containerd[2011]: time="2025-06-20T18:26:33.565264265Z" level=info msg="connecting to shim 5f04bb2de3c5274d182580b0fb56f8effc9413d3eefbcbec014c2eafba9550c3" address="unix:///run/containerd/s/6bde50b29da8f388c324c9cb9233967e220f3877cd7d7302284a172c9555153c" protocol=ttrpc version=3 Jun 20 18:26:33.604150 systemd[1]: Started cri-containerd-5f04bb2de3c5274d182580b0fb56f8effc9413d3eefbcbec014c2eafba9550c3.scope - libcontainer container 5f04bb2de3c5274d182580b0fb56f8effc9413d3eefbcbec014c2eafba9550c3. Jun 20 18:26:33.683863 containerd[2011]: time="2025-06-20T18:26:33.683777929Z" level=info msg="StartContainer for \"5f04bb2de3c5274d182580b0fb56f8effc9413d3eefbcbec014c2eafba9550c3\" returns successfully" Jun 20 18:26:41.868006 kubelet[3290]: E0620 18:26:41.867693 3290 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-140?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"