Jul 10 23:35:03.236495 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 10 23:35:03.236539 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Jul 10 22:12:17 -00 2025 Jul 10 23:35:03.236583 kernel: KASLR disabled due to lack of seed Jul 10 23:35:03.236601 kernel: efi: EFI v2.7 by EDK II Jul 10 23:35:03.236617 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598 Jul 10 23:35:03.236633 kernel: secureboot: Secure boot disabled Jul 10 23:35:03.236651 kernel: ACPI: Early table checksum verification disabled Jul 10 23:35:03.236666 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 10 23:35:03.236682 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 10 23:35:03.236698 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 10 23:35:03.236719 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 10 23:35:03.236735 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 10 23:35:03.236751 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 10 23:35:03.236767 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 10 23:35:03.236785 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 10 23:35:03.236805 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 10 23:35:03.236822 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 10 23:35:03.236839 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 10 23:35:03.236855 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 10 23:35:03.236871 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 10 23:35:03.236887 kernel: printk: bootconsole [uart0] enabled Jul 10 23:35:03.236903 kernel: NUMA: Failed to initialise from firmware Jul 10 23:35:03.236920 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 10 23:35:03.236936 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jul 10 23:35:03.236952 kernel: Zone ranges: Jul 10 23:35:03.236969 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 10 23:35:03.236989 kernel: DMA32 empty Jul 10 23:35:03.237006 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 10 23:35:03.237023 kernel: Movable zone start for each node Jul 10 23:35:03.237038 kernel: Early memory node ranges Jul 10 23:35:03.237055 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 10 23:35:03.237071 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 10 23:35:03.237088 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 10 23:35:03.237104 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 10 23:35:03.237121 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 10 23:35:03.237137 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 10 23:35:03.237154 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 10 23:35:03.237169 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 10 23:35:03.237190 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Jul 10 23:35:03.237207 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jul 10 23:35:03.237229 kernel: psci: probing for conduit method from ACPI. Jul 10 23:35:03.237246 kernel: psci: PSCIv1.0 detected in firmware. Jul 10 23:35:03.237263 kernel: psci: Using standard PSCI v0.2 function IDs Jul 10 23:35:03.237284 kernel: psci: Trusted OS migration not required Jul 10 23:35:03.237301 kernel: psci: SMC Calling Convention v1.1 Jul 10 23:35:03.237318 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jul 10 23:35:03.237335 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 10 23:35:03.237352 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 10 23:35:03.237405 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 10 23:35:03.237423 kernel: Detected PIPT I-cache on CPU0 Jul 10 23:35:03.237440 kernel: CPU features: detected: GIC system register CPU interface Jul 10 23:35:03.237457 kernel: CPU features: detected: Spectre-v2 Jul 10 23:35:03.237474 kernel: CPU features: detected: Spectre-v3a Jul 10 23:35:03.237490 kernel: CPU features: detected: Spectre-BHB Jul 10 23:35:03.237513 kernel: CPU features: detected: ARM erratum 1742098 Jul 10 23:35:03.237531 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 10 23:35:03.237548 kernel: alternatives: applying boot alternatives Jul 10 23:35:03.237567 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7d7ae41c578f00376368863b7a3cf53d899e76a854273f3187550259460980dc Jul 10 23:35:03.237586 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 23:35:03.237603 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 23:35:03.237620 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 23:35:03.237637 kernel: Fallback order for Node 0: 0 Jul 10 23:35:03.237653 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jul 10 23:35:03.237670 kernel: Policy zone: Normal Jul 10 23:35:03.237687 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 23:35:03.237708 kernel: software IO TLB: area num 2. Jul 10 23:35:03.237726 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 10 23:35:03.237743 kernel: Memory: 3821176K/4030464K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 209288K reserved, 0K cma-reserved) Jul 10 23:35:03.237761 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 10 23:35:03.237777 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 23:35:03.237795 kernel: rcu: RCU event tracing is enabled. Jul 10 23:35:03.237813 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 10 23:35:03.237830 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 23:35:03.237847 kernel: Tracing variant of Tasks RCU enabled. Jul 10 23:35:03.237864 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 10 23:35:03.237881 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 10 23:35:03.237901 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 10 23:35:03.237918 kernel: GICv3: 96 SPIs implemented Jul 10 23:35:03.237935 kernel: GICv3: 0 Extended SPIs implemented Jul 10 23:35:03.237952 kernel: Root IRQ handler: gic_handle_irq Jul 10 23:35:03.237969 kernel: GICv3: GICv3 features: 16 PPIs Jul 10 23:35:03.237985 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 10 23:35:03.238002 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 10 23:35:03.238019 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jul 10 23:35:03.238036 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jul 10 23:35:03.238053 kernel: GICv3: using LPI property table @0x00000004000d0000 Jul 10 23:35:03.238070 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 10 23:35:03.238087 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jul 10 23:35:03.238108 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 10 23:35:03.238125 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 10 23:35:03.238143 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 10 23:35:03.238160 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 10 23:35:03.238177 kernel: Console: colour dummy device 80x25 Jul 10 23:35:03.238194 kernel: printk: console [tty1] enabled Jul 10 23:35:03.238211 kernel: ACPI: Core revision 20230628 Jul 10 23:35:03.238229 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 10 23:35:03.238246 kernel: pid_max: default: 32768 minimum: 301 Jul 10 23:35:03.238264 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 10 23:35:03.238286 kernel: landlock: Up and running. Jul 10 23:35:03.238303 kernel: SELinux: Initializing. Jul 10 23:35:03.238320 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 23:35:03.238338 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 23:35:03.238383 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 10 23:35:03.238406 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 10 23:35:03.238424 kernel: rcu: Hierarchical SRCU implementation. Jul 10 23:35:03.238442 kernel: rcu: Max phase no-delay instances is 400. Jul 10 23:35:03.238459 kernel: Platform MSI: ITS@0x10080000 domain created Jul 10 23:35:03.238483 kernel: PCI/MSI: ITS@0x10080000 domain created Jul 10 23:35:03.238500 kernel: Remapping and enabling EFI services. Jul 10 23:35:03.238517 kernel: smp: Bringing up secondary CPUs ... Jul 10 23:35:03.238534 kernel: Detected PIPT I-cache on CPU1 Jul 10 23:35:03.238552 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 10 23:35:03.238569 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jul 10 23:35:03.238587 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 10 23:35:03.238604 kernel: smp: Brought up 1 node, 2 CPUs Jul 10 23:35:03.238621 kernel: SMP: Total of 2 processors activated. 
Jul 10 23:35:03.238643 kernel: CPU features: detected: 32-bit EL0 Support Jul 10 23:35:03.238660 kernel: CPU features: detected: 32-bit EL1 Support Jul 10 23:35:03.238689 kernel: CPU features: detected: CRC32 instructions Jul 10 23:35:03.238711 kernel: CPU: All CPU(s) started at EL1 Jul 10 23:35:03.238729 kernel: alternatives: applying system-wide alternatives Jul 10 23:35:03.238747 kernel: devtmpfs: initialized Jul 10 23:35:03.238765 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 23:35:03.238783 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 10 23:35:03.238801 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 23:35:03.238823 kernel: SMBIOS 3.0.0 present. Jul 10 23:35:03.238841 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 10 23:35:03.238859 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 23:35:03.238878 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 10 23:35:03.238896 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 10 23:35:03.238914 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 10 23:35:03.238933 kernel: audit: initializing netlink subsys (disabled) Jul 10 23:35:03.238955 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1 Jul 10 23:35:03.238973 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 23:35:03.238991 kernel: cpuidle: using governor menu Jul 10 23:35:03.239009 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 10 23:35:03.239027 kernel: ASID allocator initialised with 65536 entries Jul 10 23:35:03.239044 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 23:35:03.239062 kernel: Serial: AMBA PL011 UART driver Jul 10 23:35:03.239081 kernel: Modules: 17744 pages in range for non-PLT usage Jul 10 23:35:03.239098 kernel: Modules: 509264 pages in range for PLT usage Jul 10 23:35:03.239121 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 23:35:03.239139 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 10 23:35:03.239157 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 10 23:35:03.239175 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 10 23:35:03.239193 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 23:35:03.239211 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 10 23:35:03.239229 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 10 23:35:03.239248 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 10 23:35:03.239265 kernel: ACPI: Added _OSI(Module Device) Jul 10 23:35:03.239287 kernel: ACPI: Added _OSI(Processor Device) Jul 10 23:35:03.239305 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 23:35:03.239323 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 23:35:03.239341 kernel: ACPI: Interpreter enabled Jul 10 23:35:03.241413 kernel: ACPI: Using GIC for interrupt routing Jul 10 23:35:03.241445 kernel: ACPI: MCFG table detected, 1 entries Jul 10 23:35:03.241464 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 10 23:35:03.241779 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 23:35:03.241990 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 10 23:35:03.242187 
kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 10 23:35:03.242437 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 10 23:35:03.242681 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 10 23:35:03.242715 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 10 23:35:03.242735 kernel: acpiphp: Slot [1] registered Jul 10 23:35:03.242754 kernel: acpiphp: Slot [2] registered Jul 10 23:35:03.242773 kernel: acpiphp: Slot [3] registered Jul 10 23:35:03.242791 kernel: acpiphp: Slot [4] registered Jul 10 23:35:03.242820 kernel: acpiphp: Slot [5] registered Jul 10 23:35:03.242838 kernel: acpiphp: Slot [6] registered Jul 10 23:35:03.242856 kernel: acpiphp: Slot [7] registered Jul 10 23:35:03.242874 kernel: acpiphp: Slot [8] registered Jul 10 23:35:03.242892 kernel: acpiphp: Slot [9] registered Jul 10 23:35:03.242909 kernel: acpiphp: Slot [10] registered Jul 10 23:35:03.242927 kernel: acpiphp: Slot [11] registered Jul 10 23:35:03.242945 kernel: acpiphp: Slot [12] registered Jul 10 23:35:03.242963 kernel: acpiphp: Slot [13] registered Jul 10 23:35:03.242986 kernel: acpiphp: Slot [14] registered Jul 10 23:35:03.243004 kernel: acpiphp: Slot [15] registered Jul 10 23:35:03.243022 kernel: acpiphp: Slot [16] registered Jul 10 23:35:03.243040 kernel: acpiphp: Slot [17] registered Jul 10 23:35:03.243058 kernel: acpiphp: Slot [18] registered Jul 10 23:35:03.243075 kernel: acpiphp: Slot [19] registered Jul 10 23:35:03.243093 kernel: acpiphp: Slot [20] registered Jul 10 23:35:03.243111 kernel: acpiphp: Slot [21] registered Jul 10 23:35:03.243129 kernel: acpiphp: Slot [22] registered Jul 10 23:35:03.243147 kernel: acpiphp: Slot [23] registered Jul 10 23:35:03.243169 kernel: acpiphp: Slot [24] registered Jul 10 23:35:03.243187 kernel: acpiphp: Slot [25] registered Jul 10 23:35:03.243205 kernel: acpiphp: Slot [26] registered Jul 10 23:35:03.243223 kernel: acpiphp: Slot [27] registered Jul 10 23:35:03.243241 kernel: acpiphp: Slot [28] registered Jul 10 23:35:03.243259 kernel: acpiphp: Slot [29] registered Jul 10 23:35:03.243277 kernel: acpiphp: Slot [30] registered Jul 10 23:35:03.243295 kernel: acpiphp: Slot [31] registered Jul 10 23:35:03.243313 kernel: PCI host bridge to bus 0000:00 Jul 10 23:35:03.243643 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 10 23:35:03.243837 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 10 23:35:03.244036 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 10 23:35:03.244236 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 10 23:35:03.244638 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jul 10 23:35:03.244891 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jul 10 23:35:03.245106 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jul 10 23:35:03.245321 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 10 23:35:03.247649 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jul 10 23:35:03.247886 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 10 23:35:03.248117 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 10 23:35:03.248328 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jul 10 23:35:03.248594 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jul 10 23:35:03.248824 kernel: pci 0000:00:05.0: reg 0x20: 
[mem 0x80100000-0x8010ffff] Jul 10 23:35:03.249032 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 10 23:35:03.249241 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jul 10 23:35:03.249532 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jul 10 23:35:03.249750 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jul 10 23:35:03.249970 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jul 10 23:35:03.250196 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jul 10 23:35:03.250481 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 10 23:35:03.250667 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 10 23:35:03.250854 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 10 23:35:03.250880 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 10 23:35:03.250899 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 10 23:35:03.250918 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 10 23:35:03.250937 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 10 23:35:03.250955 kernel: iommu: Default domain type: Translated Jul 10 23:35:03.250982 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 10 23:35:03.251000 kernel: efivars: Registered efivars operations Jul 10 23:35:03.251019 kernel: vgaarb: loaded Jul 10 23:35:03.251037 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 10 23:35:03.251055 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 23:35:03.251074 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 23:35:03.251092 kernel: pnp: PnP ACPI init Jul 10 23:35:03.251311 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 10 23:35:03.251351 kernel: pnp: PnP ACPI: found 1 devices Jul 10 23:35:03.251419 kernel: NET: Registered PF_INET protocol family Jul 10 23:35:03.251441 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 23:35:03.251461 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 23:35:03.251481 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 23:35:03.251501 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 23:35:03.251520 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 10 23:35:03.251539 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 23:35:03.251557 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 23:35:03.251584 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 23:35:03.251604 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 23:35:03.251623 kernel: PCI: CLS 0 bytes, default 64 Jul 10 23:35:03.251641 kernel: kvm [1]: HYP mode not available Jul 10 23:35:03.251660 kernel: Initialise system trusted keyrings Jul 10 23:35:03.251679 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 23:35:03.251698 kernel: Key type asymmetric registered Jul 10 23:35:03.251716 kernel: Asymmetric key parser 'x509' registered Jul 10 23:35:03.251735 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 10 23:35:03.251758 kernel: io scheduler mq-deadline registered Jul 10 23:35:03.251777 kernel: io scheduler kyber registered Jul 10 23:35:03.251796 kernel: io 
scheduler bfq registered Jul 10 23:35:03.252062 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jul 10 23:35:03.252092 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 10 23:35:03.252111 kernel: ACPI: button: Power Button [PWRB] Jul 10 23:35:03.252130 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 10 23:35:03.252148 kernel: ACPI: button: Sleep Button [SLPB] Jul 10 23:35:03.252166 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 23:35:03.252192 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 10 23:35:03.252435 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 10 23:35:03.252465 kernel: printk: console [ttyS0] disabled Jul 10 23:35:03.252485 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 10 23:35:03.252505 kernel: printk: console [ttyS0] enabled Jul 10 23:35:03.252524 kernel: printk: bootconsole [uart0] disabled Jul 10 23:35:03.252543 kernel: thunder_xcv, ver 1.0 Jul 10 23:35:03.252581 kernel: thunder_bgx, ver 1.0 Jul 10 23:35:03.252603 kernel: nicpf, ver 1.0 Jul 10 23:35:03.252630 kernel: nicvf, ver 1.0 Jul 10 23:35:03.252904 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 10 23:35:03.253107 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T23:35:02 UTC (1752190502) Jul 10 23:35:03.253132 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 10 23:35:03.253152 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jul 10 23:35:03.253171 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 10 23:35:03.253189 kernel: watchdog: Hard watchdog permanently disabled Jul 10 23:35:03.253207 kernel: NET: Registered PF_INET6 protocol family Jul 10 23:35:03.253233 kernel: Segment Routing with IPv6 Jul 10 23:35:03.253252 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 23:35:03.253270 kernel: NET: Registered PF_PACKET protocol family Jul 10 23:35:03.253288 kernel: Key type dns_resolver registered Jul 10 23:35:03.253307 kernel: registered taskstats version 1 Jul 10 23:35:03.253325 kernel: Loading compiled-in X.509 certificates Jul 10 23:35:03.253344 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 31389229b1c1b066a3aecee2ec344e038e2f2cc0' Jul 10 23:35:03.253465 kernel: Key type .fscrypt registered Jul 10 23:35:03.253486 kernel: Key type fscrypt-provisioning registered Jul 10 23:35:03.253511 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 23:35:03.253530 kernel: ima: Allocated hash algorithm: sha1 Jul 10 23:35:03.253548 kernel: ima: No architecture policies found Jul 10 23:35:03.253566 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 10 23:35:03.253583 kernel: clk: Disabling unused clocks Jul 10 23:35:03.253601 kernel: Freeing unused kernel memory: 38336K Jul 10 23:35:03.253619 kernel: Run /init as init process Jul 10 23:35:03.253637 kernel: with arguments: Jul 10 23:35:03.253655 kernel: /init Jul 10 23:35:03.253677 kernel: with environment: Jul 10 23:35:03.253696 kernel: HOME=/ Jul 10 23:35:03.253714 kernel: TERM=linux Jul 10 23:35:03.253733 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 23:35:03.253753 systemd[1]: Successfully made /usr/ read-only. 
Jul 10 23:35:03.253778 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 23:35:03.253799 systemd[1]: Detected virtualization amazon. Jul 10 23:35:03.253822 systemd[1]: Detected architecture arm64. Jul 10 23:35:03.253842 systemd[1]: Running in initrd. Jul 10 23:35:03.253861 systemd[1]: No hostname configured, using default hostname. Jul 10 23:35:03.253881 systemd[1]: Hostname set to . Jul 10 23:35:03.253901 systemd[1]: Initializing machine ID from VM UUID. Jul 10 23:35:03.253921 systemd[1]: Queued start job for default target initrd.target. Jul 10 23:35:03.253940 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 23:35:03.253960 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 23:35:03.253981 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 23:35:03.254006 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 23:35:03.254026 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 23:35:03.254048 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 23:35:03.254069 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 23:35:03.254090 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 23:35:03.254110 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 23:35:03.254134 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 23:35:03.254154 systemd[1]: Reached target paths.target - Path Units. Jul 10 23:35:03.254174 systemd[1]: Reached target slices.target - Slice Units. Jul 10 23:35:03.254194 systemd[1]: Reached target swap.target - Swaps. Jul 10 23:35:03.254213 systemd[1]: Reached target timers.target - Timer Units. Jul 10 23:35:03.254233 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 23:35:03.254253 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 23:35:03.254273 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 23:35:03.254293 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 23:35:03.254318 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 23:35:03.254338 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 23:35:03.254379 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 23:35:03.254405 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 23:35:03.254425 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 23:35:03.254446 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 23:35:03.254467 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 23:35:03.254487 systemd[1]: Starting systemd-fsck-usr.service... 
Jul 10 23:35:03.254507 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 23:35:03.254534 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 23:35:03.254554 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:35:03.254574 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 23:35:03.254594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 23:35:03.254615 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 23:35:03.254639 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 23:35:03.254715 systemd-journald[251]: Collecting audit messages is disabled. Jul 10 23:35:03.254760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:35:03.254787 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 23:35:03.254809 systemd-journald[251]: Journal started Jul 10 23:35:03.254849 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2e7a31bc36126551e52dfe6f7976be) is 8M, max 75.3M, 67.3M free. Jul 10 23:35:03.218029 systemd-modules-load[252]: Inserted module 'overlay' Jul 10 23:35:03.270198 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 23:35:03.270273 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 23:35:03.273820 systemd-modules-load[252]: Inserted module 'br_netfilter' Jul 10 23:35:03.275984 kernel: Bridge firewalling registered Jul 10 23:35:03.281296 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 23:35:03.286033 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 23:35:03.292755 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:35:03.301694 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 23:35:03.311694 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 23:35:03.340500 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 23:35:03.341148 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:35:03.367929 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 23:35:03.371047 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 23:35:03.381709 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 23:35:03.396100 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 10 23:35:03.421810 dracut-cmdline[288]: dracut-dracut-053 Jul 10 23:35:03.428667 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7d7ae41c578f00376368863b7a3cf53d899e76a854273f3187550259460980dc Jul 10 23:35:03.496749 systemd-resolved[289]: Positive Trust Anchors: Jul 10 23:35:03.496786 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 23:35:03.496846 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 23:35:03.593398 kernel: SCSI subsystem initialized Jul 10 23:35:03.602381 kernel: Loading iSCSI transport class v2.0-870. Jul 10 23:35:03.613394 kernel: iscsi: registered transport (tcp) Jul 10 23:35:03.635394 kernel: iscsi: registered transport (qla4xxx) Jul 10 23:35:03.636389 kernel: QLogic iSCSI HBA Driver Jul 10 23:35:03.729395 kernel: random: crng init done Jul 10 23:35:03.730095 systemd-resolved[289]: Defaulting to hostname 'linux'. Jul 10 23:35:03.734003 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 23:35:03.737382 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 23:35:03.758418 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 23:35:03.768668 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 23:35:03.803959 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 23:35:03.804049 kernel: device-mapper: uevent: version 1.0.3 Jul 10 23:35:03.805949 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 10 23:35:03.871424 kernel: raid6: neonx8 gen() 6565 MB/s Jul 10 23:35:03.888393 kernel: raid6: neonx4 gen() 6541 MB/s Jul 10 23:35:03.905393 kernel: raid6: neonx2 gen() 5427 MB/s Jul 10 23:35:03.922396 kernel: raid6: neonx1 gen() 3946 MB/s Jul 10 23:35:03.939392 kernel: raid6: int64x8 gen() 3621 MB/s Jul 10 23:35:03.956394 kernel: raid6: int64x4 gen() 3717 MB/s Jul 10 23:35:03.973391 kernel: raid6: int64x2 gen() 3604 MB/s Jul 10 23:35:03.991375 kernel: raid6: int64x1 gen() 2758 MB/s Jul 10 23:35:03.991412 kernel: raid6: using algorithm neonx8 gen() 6565 MB/s Jul 10 23:35:04.010376 kernel: raid6: .... 
xor() 4713 MB/s, rmw enabled Jul 10 23:35:04.010435 kernel: raid6: using neon recovery algorithm Jul 10 23:35:04.019055 kernel: xor: measuring software checksum speed Jul 10 23:35:04.019122 kernel: 8regs : 12951 MB/sec Jul 10 23:35:04.020252 kernel: 32regs : 13051 MB/sec Jul 10 23:35:04.022541 kernel: arm64_neon : 8757 MB/sec Jul 10 23:35:04.022576 kernel: xor: using function: 32regs (13051 MB/sec) Jul 10 23:35:04.105414 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 23:35:04.124192 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 23:35:04.135665 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:35:04.179784 systemd-udevd[471]: Using default interface naming scheme 'v255'. Jul 10 23:35:04.190998 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 23:35:04.207786 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 23:35:04.237985 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Jul 10 23:35:04.294527 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 23:35:04.304803 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 23:35:04.426986 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 23:35:04.454691 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 23:35:04.505994 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 23:35:04.515631 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 23:35:04.520918 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 23:35:04.524408 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 23:35:04.536029 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 23:35:04.569592 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 23:35:04.641735 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 10 23:35:04.641804 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 10 23:35:04.648278 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 10 23:35:04.648682 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 10 23:35:04.662396 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:bd:de:f3:8d:0f Jul 10 23:35:04.666779 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 23:35:04.667018 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 23:35:04.669995 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 23:35:04.672412 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 23:35:04.672693 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:35:04.676848 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:35:04.697747 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line. Jul 10 23:35:04.700133 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:35:04.707002 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jul 10 23:35:04.731119 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 10 23:35:04.731189 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 10 23:35:04.739423 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 10 23:35:04.741583 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:35:04.752713 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 23:35:04.764720 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 23:35:04.764775 kernel: GPT:9289727 != 16777215 Jul 10 23:35:04.764801 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 23:35:04.770213 kernel: GPT:9289727 != 16777215 Jul 10 23:35:04.770280 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 23:35:04.770305 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 23:35:04.790016 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 23:35:04.887408 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by (udev-worker) (515) Jul 10 23:35:04.914414 kernel: BTRFS: device fsid 28ea517e-145c-4223-93e8-6347aefbc032 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (526) Jul 10 23:35:04.967143 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 10 23:35:05.037763 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 10 23:35:05.064214 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 10 23:35:05.085210 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 10 23:35:05.087978 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 10 23:35:05.104726 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 23:35:05.120388 disk-uuid[661]: Primary Header is updated. Jul 10 23:35:05.120388 disk-uuid[661]: Secondary Entries is updated. Jul 10 23:35:05.120388 disk-uuid[661]: Secondary Header is updated. Jul 10 23:35:05.128729 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 23:35:05.155522 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 23:35:06.165383 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 23:35:06.166940 disk-uuid[662]: The operation has completed successfully. Jul 10 23:35:06.377274 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 23:35:06.377535 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 23:35:06.453621 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 23:35:06.464482 sh[920]: Success Jul 10 23:35:06.486500 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 10 23:35:06.607177 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 23:35:06.626604 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 23:35:06.631440 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 10 23:35:06.676044 kernel: BTRFS info (device dm-0): first mount of filesystem 28ea517e-145c-4223-93e8-6347aefbc032 Jul 10 23:35:06.676115 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:35:06.678111 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 10 23:35:06.679555 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 10 23:35:06.680735 kernel: BTRFS info (device dm-0): using free space tree Jul 10 23:35:06.800392 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 10 23:35:06.824950 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 23:35:06.825470 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 23:35:06.837723 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 23:35:06.839989 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 23:35:06.890439 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e248a549-ad9c-46e4-9226-90e819becc10 Jul 10 23:35:06.890524 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:35:06.891859 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 10 23:35:06.907398 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 10 23:35:06.916387 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e248a549-ad9c-46e4-9226-90e819becc10 Jul 10 23:35:06.918726 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 23:35:06.933779 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 23:35:07.016198 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 23:35:07.029741 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 23:35:07.100273 systemd-networkd[1122]: lo: Link UP Jul 10 23:35:07.100286 systemd-networkd[1122]: lo: Gained carrier Jul 10 23:35:07.103303 systemd-networkd[1122]: Enumeration completed Jul 10 23:35:07.103472 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 23:35:07.104318 systemd-networkd[1122]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:35:07.104325 systemd-networkd[1122]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 23:35:07.109263 systemd[1]: Reached target network.target - Network. Jul 10 23:35:07.125293 systemd-networkd[1122]: eth0: Link UP Jul 10 23:35:07.125600 systemd-networkd[1122]: eth0: Gained carrier Jul 10 23:35:07.125620 systemd-networkd[1122]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:35:07.148436 systemd-networkd[1122]: eth0: DHCPv4 address 172.31.24.228/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 10 23:35:07.398121 ignition[1053]: Ignition 2.20.0 Jul 10 23:35:07.398142 ignition[1053]: Stage: fetch-offline Jul 10 23:35:07.399178 ignition[1053]: no configs at "/usr/lib/ignition/base.d" Jul 10 23:35:07.404909 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 10 23:35:07.399202 ignition[1053]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 23:35:07.415779 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 10 23:35:07.399888 ignition[1053]: Ignition finished successfully Jul 10 23:35:07.449060 ignition[1135]: Ignition 2.20.0 Jul 10 23:35:07.449090 ignition[1135]: Stage: fetch Jul 10 23:35:07.450841 ignition[1135]: no configs at "/usr/lib/ignition/base.d" Jul 10 23:35:07.450869 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 23:35:07.452216 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 23:35:07.466486 ignition[1135]: PUT result: OK Jul 10 23:35:07.469595 ignition[1135]: parsed url from cmdline: "" Jul 10 23:35:07.469727 ignition[1135]: no config URL provided Jul 10 23:35:07.469746 ignition[1135]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 23:35:07.469772 ignition[1135]: no config at "/usr/lib/ignition/user.ign" Jul 10 23:35:07.469804 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 23:35:07.471701 ignition[1135]: PUT result: OK Jul 10 23:35:07.473981 ignition[1135]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 10 23:35:07.484413 ignition[1135]: GET result: OK Jul 10 23:35:07.484589 ignition[1135]: parsing config with SHA512: f5568abbd6cfcbb444c9956627e17968ef3a2167675d85810274e704ec1eea26a639b1877c5b1b3bd1d182d796eff08f6fd4ee51195e7515ae1e661e15439b87 Jul 10 23:35:07.500395 unknown[1135]: fetched base config from "system" Jul 10 23:35:07.500422 unknown[1135]: fetched base config from "system" Jul 10 23:35:07.501927 ignition[1135]: fetch: fetch complete Jul 10 23:35:07.500445 unknown[1135]: fetched user config from "aws" Jul 10 23:35:07.501941 ignition[1135]: fetch: fetch passed Jul 10 23:35:07.507736 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 10 23:35:07.502048 ignition[1135]: Ignition finished successfully Jul 10 23:35:07.524789 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 23:35:07.552342 ignition[1141]: Ignition 2.20.0 Jul 10 23:35:07.552399 ignition[1141]: Stage: kargs Jul 10 23:35:07.553074 ignition[1141]: no configs at "/usr/lib/ignition/base.d" Jul 10 23:35:07.553102 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 23:35:07.553255 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 23:35:07.560576 ignition[1141]: PUT result: OK Jul 10 23:35:07.567224 ignition[1141]: kargs: kargs passed Jul 10 23:35:07.567398 ignition[1141]: Ignition finished successfully Jul 10 23:35:07.574045 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 23:35:07.584672 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 10 23:35:07.612834 ignition[1147]: Ignition 2.20.0 Jul 10 23:35:07.612864 ignition[1147]: Stage: disks Jul 10 23:35:07.613522 ignition[1147]: no configs at "/usr/lib/ignition/base.d" Jul 10 23:35:07.613548 ignition[1147]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 23:35:07.613697 ignition[1147]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 23:35:07.617245 ignition[1147]: PUT result: OK Jul 10 23:35:07.629291 ignition[1147]: disks: disks passed Jul 10 23:35:07.629415 ignition[1147]: Ignition finished successfully Jul 10 23:35:07.633047 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 23:35:07.637760 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jul 10 23:35:07.644774 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 23:35:07.647403 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 23:35:07.649934 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 23:35:07.654653 systemd[1]: Reached target basic.target - Basic System. Jul 10 23:35:07.671726 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 23:35:07.715193 systemd-fsck[1156]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 10 23:35:07.722712 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 23:35:07.735587 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 23:35:07.813449 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ef1c88fa-d23e-4a16-bbbf-07c92f8585ec r/w with ordered data mode. Quota mode: none. Jul 10 23:35:07.815046 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 23:35:07.821821 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 23:35:07.838567 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 23:35:07.846621 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 23:35:07.851996 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 23:35:07.852086 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 23:35:07.852138 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 23:35:07.872572 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1175) Jul 10 23:35:07.876666 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e248a549-ad9c-46e4-9226-90e819becc10 Jul 10 23:35:07.876725 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:35:07.876752 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 10 23:35:07.883424 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 23:35:07.894351 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 10 23:35:07.894637 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 23:35:07.902932 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 23:35:08.259535 systemd-networkd[1122]: eth0: Gained IPv6LL Jul 10 23:35:08.371833 initrd-setup-root[1199]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 23:35:08.392723 initrd-setup-root[1206]: cut: /sysroot/etc/group: No such file or directory Jul 10 23:35:08.401103 initrd-setup-root[1213]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 23:35:08.410133 initrd-setup-root[1220]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 23:35:08.758865 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 23:35:08.772642 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 23:35:08.780958 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 23:35:08.794140 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 10 23:35:08.797506 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e248a549-ad9c-46e4-9226-90e819becc10 Jul 10 23:35:08.837762 ignition[1288]: INFO : Ignition 2.20.0 Jul 10 23:35:08.837762 ignition[1288]: INFO : Stage: mount Jul 10 23:35:08.842553 ignition[1288]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 23:35:08.842553 ignition[1288]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 23:35:08.842553 ignition[1288]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 23:35:08.842553 ignition[1288]: INFO : PUT result: OK Jul 10 23:35:08.854709 ignition[1288]: INFO : mount: mount passed Jul 10 23:35:08.858034 ignition[1288]: INFO : Ignition finished successfully Jul 10 23:35:08.859937 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 23:35:08.864784 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 23:35:08.873654 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 23:35:08.897742 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 23:35:08.923394 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1300) Jul 10 23:35:08.927591 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e248a549-ad9c-46e4-9226-90e819becc10 Jul 10 23:35:08.927647 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:35:08.927685 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 10 23:35:08.934403 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 10 23:35:08.937660 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 23:35:08.970776 ignition[1317]: INFO : Ignition 2.20.0 Jul 10 23:35:08.972893 ignition[1317]: INFO : Stage: files Jul 10 23:35:08.972893 ignition[1317]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 23:35:08.972893 ignition[1317]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 23:35:08.972893 ignition[1317]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 23:35:08.982504 ignition[1317]: INFO : PUT result: OK Jul 10 23:35:08.987430 ignition[1317]: DEBUG : files: compiled without relabeling support, skipping Jul 10 23:35:09.004480 ignition[1317]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 23:35:09.004480 ignition[1317]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 23:35:09.048794 ignition[1317]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 23:35:09.052039 ignition[1317]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 23:35:09.055455 unknown[1317]: wrote ssh authorized keys file for user: core Jul 10 23:35:09.057897 ignition[1317]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 23:35:09.061350 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 10 23:35:09.065528 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 10 23:35:09.173065 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 23:35:09.349538 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 10 
23:35:09.354015 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 23:35:09.354015 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 10 23:35:09.907049 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 10 23:35:10.142426 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 23:35:10.142426 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 10 23:35:10.142426 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 23:35:10.142426 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 23:35:10.142426 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 23:35:10.142426 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 23:35:10.142426 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 23:35:10.169117 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 23:35:10.169117 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 23:35:10.169117 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 23:35:10.169117 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 23:35:10.169117 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 10 23:35:10.169117 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 10 23:35:10.169117 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 10 23:35:10.169117 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 10 23:35:10.848341 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 10 23:35:11.194105 ignition[1317]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 10 23:35:11.194105 ignition[1317]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 10 23:35:11.204744 ignition[1317]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 
23:35:11.204744 ignition[1317]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 23:35:11.204744 ignition[1317]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 10 23:35:11.204744 ignition[1317]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 10 23:35:11.204744 ignition[1317]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 23:35:11.204744 ignition[1317]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 23:35:11.204744 ignition[1317]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 23:35:11.204744 ignition[1317]: INFO : files: files passed Jul 10 23:35:11.204744 ignition[1317]: INFO : Ignition finished successfully Jul 10 23:35:11.232877 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 10 23:35:11.244756 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 23:35:11.259903 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 10 23:35:11.269251 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 23:35:11.269487 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 10 23:35:11.296325 initrd-setup-root-after-ignition[1345]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 23:35:11.296325 initrd-setup-root-after-ignition[1345]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 23:35:11.304017 initrd-setup-root-after-ignition[1349]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 23:35:11.310405 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 23:35:11.316541 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 23:35:11.327704 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 23:35:11.378516 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 23:35:11.378899 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 23:35:11.387863 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 10 23:35:11.390144 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 23:35:11.392421 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 23:35:11.401273 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 23:35:11.439294 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 23:35:11.453758 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 23:35:11.485559 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 23:35:11.485769 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 23:35:11.491892 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 10 23:35:11.498874 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 23:35:11.501525 systemd[1]: Stopped target timers.target - Timer Units. 
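The files stage above is driven by a declarative Ignition config delivered through EC2 user data; the journal only shows the resulting operations. A minimal sketch of a config that would produce roughly these ops, built and serialized in Python, is below. The URLs, target paths, the sysext symlink, and the unit name are taken from the log; the spec version, the SSH key placeholder, and the unit body are assumptions.

# Sketch: an Ignition config mirroring the "files" stage logged above (assumptions noted inline).
import json

config = {
    "ignition": {"version": "3.4.0"},                          # spec version is an assumption
    "passwd": {"users": [{
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder, real key not in the log)"],
    }]},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"}},
            {"path": "/opt/bin/cilium.tar.gz",
             "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw"}},
        ],
        "links": [
            # op(a): activate the kubernetes sysext by linking it under /etc/extensions
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"},
        ],
    },
    "systemd": {"units": [
        # op(c)/op(e): the log only shows the unit being written and preset-enabled; body is hypothetical
        {"name": "prepare-helm.service", "enabled": True,
         "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n..."},
    ]},
}

print(json.dumps(config, indent=2))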
Jul 10 23:35:11.503567 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 23:35:11.503676 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 23:35:11.506464 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 23:35:11.508783 systemd[1]: Stopped target basic.target - Basic System. Jul 10 23:35:11.510693 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 23:35:11.513268 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 23:35:11.515720 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 23:35:11.518125 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 23:35:11.520315 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 23:35:11.522989 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 23:35:11.525241 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 23:35:11.527407 systemd[1]: Stopped target swap.target - Swaps. Jul 10 23:35:11.529242 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 23:35:11.529342 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 23:35:11.529912 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 23:35:11.530210 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 23:35:11.530905 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 23:35:11.536159 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 23:35:11.536259 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 23:35:11.536346 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 23:35:11.536636 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 23:35:11.536716 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 23:35:11.536822 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 23:35:11.536896 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 23:35:11.566832 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 23:35:11.572589 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 23:35:11.572705 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 23:35:11.577294 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 23:35:11.613714 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 23:35:11.613844 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 23:35:11.643779 ignition[1370]: INFO : Ignition 2.20.0 Jul 10 23:35:11.643779 ignition[1370]: INFO : Stage: umount Jul 10 23:35:11.616577 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 23:35:11.649754 ignition[1370]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 23:35:11.649754 ignition[1370]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 23:35:11.649754 ignition[1370]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 23:35:11.616682 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 10 23:35:11.661500 ignition[1370]: INFO : PUT result: OK Jul 10 23:35:11.664468 ignition[1370]: INFO : umount: umount passed Jul 10 23:35:11.667529 ignition[1370]: INFO : Ignition finished successfully Jul 10 23:35:11.670761 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 23:35:11.673571 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 23:35:11.678303 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 23:35:11.678564 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 23:35:11.682640 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 23:35:11.682735 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 23:35:11.685017 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 10 23:35:11.685123 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 10 23:35:11.687435 systemd[1]: Stopped target network.target - Network. Jul 10 23:35:11.687544 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 23:35:11.687652 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 23:35:11.688316 systemd[1]: Stopped target paths.target - Path Units. Jul 10 23:35:11.694951 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 23:35:11.707601 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 23:35:11.711012 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 23:35:11.716620 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 23:35:11.723452 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 23:35:11.723542 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 23:35:11.732056 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 23:35:11.732136 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 23:35:11.734722 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 23:35:11.734821 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 23:35:11.737020 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 23:35:11.737101 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 23:35:11.739540 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 23:35:11.742760 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 23:35:11.758441 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 23:35:11.766742 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 23:35:11.766939 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 23:35:11.787471 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 23:35:11.788013 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 23:35:11.788253 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 23:35:11.798042 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 23:35:11.798530 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 23:35:11.798706 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 23:35:11.823678 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 23:35:11.823770 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
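The "PUT http://169.254.169.254/latest/api/token" / "PUT result: OK" pair that opens each Ignition stage above is the IMDSv2 session-token handshake: a token is obtained with a PUT and then presented on every metadata read. A minimal Python sketch of that exchange (the header names are the documented IMDSv2 ones; the TTL value and the example path are assumptions):

# Sketch of the IMDSv2 handshake Ignition performs before reading instance metadata.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds=21600):                             # TTL is an assumption
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    req = urllib.request.Request(
        IMDS + path, headers={"X-aws-ec2-metadata-token": token})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

print(imds_get("/latest/meta-data/instance-id", imds_token()))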
Jul 10 23:35:11.826531 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 23:35:11.826627 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 23:35:11.834263 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 23:35:11.845139 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 23:35:11.845269 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 23:35:11.848910 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 23:35:11.848996 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:35:11.862435 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 23:35:11.862534 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 23:35:11.864893 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 23:35:11.864981 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 23:35:11.871226 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:35:11.888955 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 23:35:11.889770 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 23:35:11.904443 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 23:35:11.904785 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 23:35:11.911968 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 23:35:11.912079 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 23:35:11.920847 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 23:35:11.920925 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 23:35:11.923763 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 23:35:11.923861 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 23:35:11.934779 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 23:35:11.934887 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 23:35:11.937401 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 23:35:11.937492 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 23:35:11.956759 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 23:35:11.959275 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 23:35:11.959452 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 23:35:11.965834 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 23:35:11.965950 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:35:11.984752 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 10 23:35:11.984919 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 23:35:11.985846 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 23:35:11.988463 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 23:35:11.996661 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jul 10 23:35:11.996837 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 23:35:12.000919 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 23:35:12.026750 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 23:35:12.044947 systemd[1]: Switching root. Jul 10 23:35:12.104039 systemd-journald[251]: Journal stopped Jul 10 23:35:14.909766 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jul 10 23:35:14.909918 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 23:35:14.909969 kernel: SELinux: policy capability open_perms=1 Jul 10 23:35:14.910000 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 23:35:14.910030 kernel: SELinux: policy capability always_check_network=0 Jul 10 23:35:14.910059 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 23:35:14.910089 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 23:35:14.910118 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 23:35:14.910146 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 23:35:14.910176 kernel: audit: type=1403 audit(1752190512.674:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 23:35:14.910220 systemd[1]: Successfully loaded SELinux policy in 81.614ms. Jul 10 23:35:14.910272 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.750ms. Jul 10 23:35:14.910308 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 23:35:14.910347 systemd[1]: Detected virtualization amazon. Jul 10 23:35:14.914187 systemd[1]: Detected architecture arm64. Jul 10 23:35:14.914229 systemd[1]: Detected first boot. Jul 10 23:35:14.914262 systemd[1]: Initializing machine ID from VM UUID. Jul 10 23:35:14.914291 zram_generator::config[1413]: No configuration found. Jul 10 23:35:14.914342 kernel: NET: Registered PF_VSOCK protocol family Jul 10 23:35:14.923806 systemd[1]: Populated /etc with preset unit settings. Jul 10 23:35:14.923857 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 23:35:14.923890 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 23:35:14.923922 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 23:35:14.923962 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 23:35:14.923994 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 23:35:14.924025 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 23:35:14.924057 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 23:35:14.924094 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 23:35:14.924125 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 23:35:14.924155 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 23:35:14.924186 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 23:35:14.924218 systemd[1]: Created slice user.slice - User and Session Slice. 
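"Detected first boot" and "Initializing machine ID from VM UUID" mean /etc/machine-id was still unset when the real root was entered, so PID 1 seeded it from the hypervisor-supplied DMI UUID. The sketch below is only a rough illustration of that decision; systemd's actual logic covers more ID sources and SMBIOS corner cases.

# Rough sketch: first boot == missing/uninitialized /etc/machine-id; on EC2 the ID can then be
# derived from the DMI product UUID (machine-id is 32 lowercase hex characters, dashes stripped).
from pathlib import Path

def is_first_boot() -> bool:
    p = Path("/etc/machine-id")
    return (not p.exists()) or p.read_text().strip() in ("", "uninitialized")

def machine_id_from_vm_uuid() -> str:
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    return uuid.replace("-", "").lower()

if is_first_boot():
    print("candidate machine-id:", machine_id_from_vm_uuid())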
Jul 10 23:35:14.924248 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 23:35:14.924277 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 23:35:14.924306 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 23:35:14.924334 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 23:35:14.925448 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 23:35:14.925496 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 23:35:14.925529 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 10 23:35:14.925558 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 23:35:14.925587 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 23:35:14.925615 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 23:35:14.925647 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 23:35:14.925685 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 23:35:14.925717 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 23:35:14.926435 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 23:35:14.926478 systemd[1]: Reached target slices.target - Slice Units. Jul 10 23:35:14.926513 systemd[1]: Reached target swap.target - Swaps. Jul 10 23:35:14.926546 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 23:35:14.926577 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 23:35:14.926606 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 10 23:35:14.926635 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 23:35:14.926671 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 23:35:14.926702 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 23:35:14.926733 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 23:35:14.926761 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 23:35:14.926791 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 23:35:14.926819 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 23:35:14.926848 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 23:35:14.926876 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 23:35:14.926906 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 23:35:14.926941 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 23:35:14.926971 systemd[1]: Reached target machines.target - Containers. Jul 10 23:35:14.926999 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 23:35:14.927033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jul 10 23:35:14.927065 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 23:35:14.927095 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 23:35:14.927124 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 23:35:14.927152 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 23:35:14.927183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:35:14.927218 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 23:35:14.927247 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 23:35:14.927279 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 23:35:14.927308 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 23:35:14.927339 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 23:35:14.940302 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 23:35:14.940350 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 23:35:14.940404 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:35:14.940445 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 23:35:14.940492 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 23:35:14.940524 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 23:35:14.940554 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 23:35:14.940583 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 23:35:14.940614 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 23:35:14.940644 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 23:35:14.940673 systemd[1]: Stopped verity-setup.service. Jul 10 23:35:14.940708 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 23:35:14.940737 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 23:35:14.940767 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 23:35:14.940798 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 23:35:14.940827 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 23:35:14.940859 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 23:35:14.940888 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 23:35:14.940922 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 23:35:14.940951 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 23:35:14.940981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 23:35:14.941011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:35:14.941044 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 23:35:14.941073 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 10 23:35:14.941101 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 23:35:14.941131 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 23:35:14.941160 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 23:35:14.941188 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 23:35:14.941216 kernel: loop: module loaded Jul 10 23:35:14.941246 kernel: fuse: init (API version 7.39) Jul 10 23:35:14.941276 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 23:35:14.941310 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 23:35:14.941342 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 23:35:14.948487 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 23:35:14.948588 systemd-journald[1500]: Collecting audit messages is disabled. Jul 10 23:35:14.948640 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 23:35:14.948671 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 23:35:14.948700 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:35:14.948737 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 23:35:14.948770 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 23:35:14.948800 kernel: ACPI: bus type drm_connector registered Jul 10 23:35:14.948829 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 23:35:14.948858 systemd-journald[1500]: Journal started Jul 10 23:35:14.948910 systemd-journald[1500]: Runtime Journal (/run/log/journal/ec2e7a31bc36126551e52dfe6f7976be) is 8M, max 75.3M, 67.3M free. Jul 10 23:35:14.955506 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:35:14.210588 systemd[1]: Queued start job for default target multi-user.target. Jul 10 23:35:14.225104 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 10 23:35:14.225968 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 23:35:14.982278 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 23:35:14.982392 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 23:35:14.991260 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 23:35:14.995043 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 23:35:14.995427 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 23:35:14.999554 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 23:35:14.999938 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 23:35:15.002970 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:35:15.003460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:35:15.007259 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Jul 10 23:35:15.010321 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 23:35:15.016988 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 23:35:15.058455 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 23:35:15.085417 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 23:35:15.102580 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 23:35:15.112579 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 23:35:15.123950 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 23:35:15.128493 kernel: loop0: detected capacity change from 0 to 123192 Jul 10 23:35:15.127238 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 23:35:15.133812 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 23:35:15.141241 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 23:35:15.154531 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:35:15.205808 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 23:35:15.224005 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 23:35:15.234615 systemd-journald[1500]: Time spent on flushing to /var/log/journal/ec2e7a31bc36126551e52dfe6f7976be is 51.408ms for 929 entries. Jul 10 23:35:15.234615 systemd-journald[1500]: System Journal (/var/log/journal/ec2e7a31bc36126551e52dfe6f7976be) is 8M, max 195.6M, 187.6M free. Jul 10 23:35:15.301836 systemd-journald[1500]: Received client request to flush runtime journal. Jul 10 23:35:15.301957 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 23:35:15.301993 kernel: loop1: detected capacity change from 0 to 207008 Jul 10 23:35:15.230614 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 23:35:15.241896 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 10 23:35:15.298653 udevadm[1563]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 10 23:35:15.305618 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 23:35:15.312634 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 23:35:15.326629 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 23:35:15.377566 systemd-tmpfiles[1568]: ACLs are not supported, ignoring. Jul 10 23:35:15.377598 systemd-tmpfiles[1568]: ACLs are not supported, ignoring. Jul 10 23:35:15.390759 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
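The journal starts in the small runtime area under /run and is moved into persistent storage under /var/log/journal once the root filesystem is writable, which is what systemd-journal-flush.service (and the 51.408ms / 929-entry flush report above) is doing. The same flush can be requested and inspected by hand; a small sketch:

# Sketch: report journal disk usage and ask journald to flush /run/log/journal into
# /var/log/journal, mirroring systemd-journal-flush.service. The flush requires root.
import subprocess

print(subprocess.run(["journalctl", "--disk-usage"], capture_output=True, text=True).stdout.strip())
subprocess.run(["journalctl", "--flush"], check=True)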
Jul 10 23:35:15.550781 kernel: loop2: detected capacity change from 0 to 113512 Jul 10 23:35:15.672399 kernel: loop3: detected capacity change from 0 to 53784 Jul 10 23:35:15.791406 kernel: loop4: detected capacity change from 0 to 123192 Jul 10 23:35:15.805445 kernel: loop5: detected capacity change from 0 to 207008 Jul 10 23:35:15.846493 kernel: loop6: detected capacity change from 0 to 113512 Jul 10 23:35:15.863597 kernel: loop7: detected capacity change from 0 to 53784 Jul 10 23:35:15.885402 (sd-merge)[1574]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 10 23:35:15.889908 (sd-merge)[1574]: Merged extensions into '/usr'. Jul 10 23:35:15.896298 systemd[1]: Reload requested from client PID 1529 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 23:35:15.896556 systemd[1]: Reloading... Jul 10 23:35:16.135406 zram_generator::config[1612]: No configuration found. Jul 10 23:35:16.436535 ldconfig[1524]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 23:35:16.448249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:35:16.596427 systemd[1]: Reloading finished in 698 ms. Jul 10 23:35:16.617952 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 23:35:16.621036 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 23:35:16.624244 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 23:35:16.644646 systemd[1]: Starting ensure-sysext.service... Jul 10 23:35:16.651756 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 23:35:16.659757 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:35:16.684493 systemd[1]: Reload requested from client PID 1656 ('systemctl') (unit ensure-sysext.service)... Jul 10 23:35:16.684522 systemd[1]: Reloading... Jul 10 23:35:16.736667 systemd-udevd[1658]: Using default interface naming scheme 'v255'. Jul 10 23:35:16.755093 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 23:35:16.755719 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 23:35:16.760728 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 23:35:16.761299 systemd-tmpfiles[1657]: ACLs are not supported, ignoring. Jul 10 23:35:16.763705 systemd-tmpfiles[1657]: ACLs are not supported, ignoring. Jul 10 23:35:16.773818 systemd-tmpfiles[1657]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 23:35:16.773847 systemd-tmpfiles[1657]: Skipping /boot Jul 10 23:35:16.825580 systemd-tmpfiles[1657]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 23:35:16.830446 systemd-tmpfiles[1657]: Skipping /boot Jul 10 23:35:16.935399 zram_generator::config[1706]: No configuration found. Jul 10 23:35:17.089479 (udev-worker)[1678]: Network interface NamePolicy= disabled on kernel command line. 
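The (sd-merge) lines show systemd-sysext overlaying the staged extension images (including the kubernetes.raw symlink written by the Ignition files stage) onto /usr; the repeated loop-device capacities suggest the same four images being attached a second time for the merge. A sketch of how the merge can be inspected or repeated from userspace (the directories are the standard sysext search paths):

# Sketch: list candidate sysext images and re-run the merge that produced
# "Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'".
import subprocess
from pathlib import Path

for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    p = Path(d)
    if p.is_dir():
        print(d, "->", sorted(child.name for child in p.iterdir()))

subprocess.run(["systemd-sysext", "status"], check=False)
subprocess.run(["systemd-sysext", "refresh"], check=False)   # needs root; re-merges into /usr and /opt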
Jul 10 23:35:17.314451 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1686) Jul 10 23:35:17.346704 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:35:17.571538 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 10 23:35:17.572032 systemd[1]: Reloading finished in 886 ms. Jul 10 23:35:17.600088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 23:35:17.643037 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 23:35:17.679378 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 10 23:35:17.691331 systemd[1]: Finished ensure-sysext.service. Jul 10 23:35:17.751936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 10 23:35:17.762698 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:35:17.772768 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 23:35:17.776177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:35:17.779692 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 10 23:35:17.786703 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 23:35:17.798768 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 23:35:17.804789 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:35:17.810627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 23:35:17.813378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:35:17.817705 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 23:35:17.820604 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:35:17.846398 lvm[1855]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 23:35:17.848713 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 23:35:17.865712 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 23:35:17.874929 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 23:35:17.877320 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 23:35:17.884733 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 23:35:17.889629 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:35:17.914119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 23:35:17.914565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:35:17.952653 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jul 10 23:35:17.957887 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 10 23:35:17.962067 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 23:35:17.962610 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 23:35:17.972331 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 23:35:17.982757 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 10 23:35:18.000202 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:35:18.005470 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:35:18.011253 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 23:35:18.013833 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 23:35:18.021151 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 23:35:18.024567 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 23:35:18.025753 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 23:35:18.045479 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 23:35:18.075434 lvm[1878]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 23:35:18.080171 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 23:35:18.083332 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 23:35:18.095242 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 23:35:18.107825 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 23:35:18.142266 augenrules[1900]: No rules Jul 10 23:35:18.145829 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:35:18.148497 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:35:18.151492 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 23:35:18.155047 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 10 23:35:18.175546 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 23:35:18.287351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:35:18.314898 systemd-networkd[1868]: lo: Link UP Jul 10 23:35:18.314915 systemd-networkd[1868]: lo: Gained carrier Jul 10 23:35:18.318601 systemd-networkd[1868]: Enumeration completed Jul 10 23:35:18.318909 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 23:35:18.319731 systemd-networkd[1868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:35:18.319739 systemd-networkd[1868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 10 23:35:18.325708 systemd-networkd[1868]: eth0: Link UP Jul 10 23:35:18.326128 systemd-networkd[1868]: eth0: Gained carrier Jul 10 23:35:18.326270 systemd-networkd[1868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:35:18.333974 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 23:35:18.348676 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 23:35:18.350011 systemd-networkd[1868]: eth0: DHCPv4 address 172.31.24.228/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 10 23:35:18.375788 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 23:35:18.379156 systemd-resolved[1869]: Positive Trust Anchors: Jul 10 23:35:18.379193 systemd-resolved[1869]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 23:35:18.379254 systemd-resolved[1869]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 23:35:18.393303 systemd-resolved[1869]: Defaulting to hostname 'linux'. Jul 10 23:35:18.396623 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 23:35:18.399169 systemd[1]: Reached target network.target - Network. Jul 10 23:35:18.401217 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 23:35:18.403836 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 23:35:18.406257 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 23:35:18.408898 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 23:35:18.411832 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 23:35:18.414418 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 23:35:18.417144 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 23:35:18.419850 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 23:35:18.419899 systemd[1]: Reached target paths.target - Path Units. Jul 10 23:35:18.421882 systemd[1]: Reached target timers.target - Timer Units. Jul 10 23:35:18.426058 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 23:35:18.430937 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 23:35:18.438900 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 23:35:18.442226 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 23:35:18.446323 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
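eth0 is matched by Flatcar's catch-all zz-default.network, so the interface is simply brought up with DHCP and receives 172.31.24.228/20 from the VPC. The shipped file's exact contents are not in the log; the sketch below writes an approximately equivalent machine-local policy (both the override path and the body are assumptions).

# Sketch: an approximate stand-in for the catch-all DHCP policy that configured eth0.
# Local .network files live in /etc/systemd/network/; the first match in lexical order wins,
# so a low-sorting name takes effect ahead of zz-default.network.
from pathlib import Path
import textwrap

network_unit = textwrap.dedent("""\
    [Match]
    Name=*

    [Network]
    DHCP=yes
""")

Path("/etc/systemd/network/00-dhcp-all.network").write_text(network_unit)   # hypothetical path; needs root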
Jul 10 23:35:18.456668 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 23:35:18.459641 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 23:35:18.463712 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 23:35:18.466287 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 23:35:18.468606 systemd[1]: Reached target basic.target - Basic System. Jul 10 23:35:18.470614 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 23:35:18.470667 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 23:35:18.479699 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 23:35:18.486601 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 10 23:35:18.494712 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 23:35:18.503864 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 23:35:18.522448 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 23:35:18.525050 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 23:35:18.528689 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 23:35:18.553581 jq[1927]: false Jul 10 23:35:18.553779 systemd[1]: Started ntpd.service - Network Time Service. Jul 10 23:35:18.566543 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 23:35:18.574648 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 10 23:35:18.587741 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 23:35:18.593273 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 23:35:18.608773 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 23:35:18.613530 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 23:35:18.614494 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 23:35:18.619593 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 23:35:18.624479 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 23:35:18.632205 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 23:35:18.632727 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 23:35:18.708931 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 23:35:18.709413 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 10 23:35:18.713283 (ntainerd)[1952]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 23:35:18.718418 extend-filesystems[1928]: Found loop4 Jul 10 23:35:18.718418 extend-filesystems[1928]: Found loop5 Jul 10 23:35:18.718418 extend-filesystems[1928]: Found loop6 Jul 10 23:35:18.718418 extend-filesystems[1928]: Found loop7 Jul 10 23:35:18.718418 extend-filesystems[1928]: Found nvme0n1 Jul 10 23:35:18.718418 extend-filesystems[1928]: Found nvme0n1p1 Jul 10 23:35:18.718418 extend-filesystems[1928]: Found nvme0n1p2 Jul 10 23:35:18.718418 extend-filesystems[1928]: Found nvme0n1p3 Jul 10 23:35:18.718418 extend-filesystems[1928]: Found usr Jul 10 23:35:18.718418 extend-filesystems[1928]: Found nvme0n1p4 Jul 10 23:35:18.718418 extend-filesystems[1928]: Found nvme0n1p6 Jul 10 23:35:18.718418 extend-filesystems[1928]: Found nvme0n1p7 Jul 10 23:35:18.752739 extend-filesystems[1928]: Found nvme0n1p9 Jul 10 23:35:18.752739 extend-filesystems[1928]: Checking size of /dev/nvme0n1p9 Jul 10 23:35:18.766728 dbus-daemon[1926]: [system] SELinux support is enabled Jul 10 23:35:18.767689 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 23:35:18.773048 jq[1941]: true Jul 10 23:35:18.777032 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 23:35:18.777135 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 23:35:18.780022 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 23:35:18.780068 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 23:35:18.803824 tar[1943]: linux-arm64/LICENSE Jul 10 23:35:18.803824 tar[1943]: linux-arm64/helm Jul 10 23:35:18.815958 dbus-daemon[1926]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1868 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 10 23:35:18.816773 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 23:35:18.819134 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 23:35:18.831035 dbus-daemon[1926]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 10 23:35:18.839316 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 10 23:35:18.871416 jq[1966]: true Jul 10 23:35:18.871825 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: ntpd 4.2.8p17@1.4004-o Thu Jul 10 21:34:46 UTC 2025 (1): Starting Jul 10 23:35:18.871825 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 10 23:35:18.871825 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: ---------------------------------------------------- Jul 10 23:35:18.871825 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: ntp-4 is maintained by Network Time Foundation, Jul 10 23:35:18.871825 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 10 23:35:18.871825 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: corporation. 
Support and training for ntp-4 are Jul 10 23:35:18.871825 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: available at https://www.nwtime.org/support Jul 10 23:35:18.871825 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: ---------------------------------------------------- Jul 10 23:35:18.870692 ntpd[1930]: ntpd 4.2.8p17@1.4004-o Thu Jul 10 21:34:46 UTC 2025 (1): Starting Jul 10 23:35:18.870739 ntpd[1930]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 10 23:35:18.870760 ntpd[1930]: ---------------------------------------------------- Jul 10 23:35:18.870779 ntpd[1930]: ntp-4 is maintained by Network Time Foundation, Jul 10 23:35:18.870797 ntpd[1930]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 10 23:35:18.870814 ntpd[1930]: corporation. Support and training for ntp-4 are Jul 10 23:35:18.870832 ntpd[1930]: available at https://www.nwtime.org/support Jul 10 23:35:18.870849 ntpd[1930]: ---------------------------------------------------- Jul 10 23:35:18.881413 extend-filesystems[1928]: Resized partition /dev/nvme0n1p9 Jul 10 23:35:18.903840 ntpd[1930]: proto: precision = 0.096 usec (-23) Jul 10 23:35:18.904053 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: proto: precision = 0.096 usec (-23) Jul 10 23:35:18.904335 extend-filesystems[1978]: resize2fs 1.47.1 (20-May-2024) Jul 10 23:35:18.907732 ntpd[1930]: basedate set to 2025-06-28 Jul 10 23:35:18.915074 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: basedate set to 2025-06-28 Jul 10 23:35:18.915074 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: gps base set to 2025-06-29 (week 2373) Jul 10 23:35:18.907767 ntpd[1930]: gps base set to 2025-06-29 (week 2373) Jul 10 23:35:18.928258 ntpd[1930]: Listen and drop on 0 v6wildcard [::]:123 Jul 10 23:35:18.928385 ntpd[1930]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 10 23:35:18.928526 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: Listen and drop on 0 v6wildcard [::]:123 Jul 10 23:35:18.928526 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 10 23:35:18.933088 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jul 10 23:35:18.936648 ntpd[1930]: Listen normally on 2 lo 127.0.0.1:123 Jul 10 23:35:18.936744 ntpd[1930]: Listen normally on 3 eth0 172.31.24.228:123 Jul 10 23:35:18.936841 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: Listen normally on 2 lo 127.0.0.1:123 Jul 10 23:35:18.936841 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: Listen normally on 3 eth0 172.31.24.228:123 Jul 10 23:35:18.936841 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: Listen normally on 4 lo [::1]:123 Jul 10 23:35:18.936811 ntpd[1930]: Listen normally on 4 lo [::1]:123 Jul 10 23:35:18.937047 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: bind(21) AF_INET6 fe80::4bd:deff:fef3:8d0f%2#123 flags 0x11 failed: Cannot assign requested address Jul 10 23:35:18.937047 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: unable to create socket on eth0 (5) for fe80::4bd:deff:fef3:8d0f%2#123 Jul 10 23:35:18.937047 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: failed to init interface for address fe80::4bd:deff:fef3:8d0f%2 Jul 10 23:35:18.937047 ntpd[1930]: 10 Jul 23:35:18 ntpd[1930]: Listening on routing socket on fd #21 for interface updates Jul 10 23:35:18.936889 ntpd[1930]: bind(21) AF_INET6 fe80::4bd:deff:fef3:8d0f%2#123 flags 0x11 failed: Cannot assign requested address Jul 10 23:35:18.936928 ntpd[1930]: unable to create socket on eth0 (5) for fe80::4bd:deff:fef3:8d0f%2#123 Jul 10 23:35:18.936956 ntpd[1930]: failed to init interface for address fe80::4bd:deff:fef3:8d0f%2 Jul 10 23:35:18.937010 ntpd[1930]: Listening on routing socket on fd #21 for interface updates Jul 10 23:35:18.938518 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 10 23:35:18.964816 update_engine[1939]: I20250710 23:35:18.964465 1939 main.cc:92] Flatcar Update Engine starting Jul 10 23:35:18.989824 systemd[1]: Started update-engine.service - Update Engine. Jul 10 23:35:18.995166 update_engine[1939]: I20250710 23:35:18.994719 1939 update_check_scheduler.cc:74] Next update check in 7m29s Jul 10 23:35:19.013748 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 23:35:19.021776 ntpd[1930]: 10 Jul 23:35:19 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 10 23:35:19.021776 ntpd[1930]: 10 Jul 23:35:19 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 10 23:35:18.987352 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 10 23:35:19.021463 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 10 23:35:19.074224 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 10 23:35:19.088870 extend-filesystems[1978]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 10 23:35:19.088870 extend-filesystems[1978]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 23:35:19.088870 extend-filesystems[1978]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 10 23:35:19.100633 extend-filesystems[1928]: Resized filesystem in /dev/nvme0n1p9 Jul 10 23:35:19.103629 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 23:35:19.104258 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
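The extend-filesystems step grows the root filesystem to fill its partition: resize2fs 1.47.1 takes the mounted ext4 on /dev/nvme0n1p9 from 553472 to 1489915 4k blocks online. The equivalent manual operation is a one-liner; a sketch (device name from the log; growing the partition itself, if that were needed, is a separate step not shown here):

# Sketch: online grow of a mounted ext4 filesystem, as extend-filesystems.service did for /.
import subprocess

subprocess.run(["resize2fs", "/dev/nvme0n1p9"], check=True)   # needs root; grows to fill the partition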
Jul 10 23:35:19.131192 coreos-metadata[1925]: Jul 10 23:35:19.129 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 10 23:35:19.132766 coreos-metadata[1925]: Jul 10 23:35:19.132 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 10 23:35:19.133222 coreos-metadata[1925]: Jul 10 23:35:19.133 INFO Fetch successful Jul 10 23:35:19.133303 coreos-metadata[1925]: Jul 10 23:35:19.133 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 10 23:35:19.134630 coreos-metadata[1925]: Jul 10 23:35:19.134 INFO Fetch successful Jul 10 23:35:19.134630 coreos-metadata[1925]: Jul 10 23:35:19.134 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 10 23:35:19.135862 coreos-metadata[1925]: Jul 10 23:35:19.135 INFO Fetch successful Jul 10 23:35:19.135862 coreos-metadata[1925]: Jul 10 23:35:19.135 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 10 23:35:19.144934 coreos-metadata[1925]: Jul 10 23:35:19.142 INFO Fetch successful Jul 10 23:35:19.144934 coreos-metadata[1925]: Jul 10 23:35:19.142 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 10 23:35:19.144934 coreos-metadata[1925]: Jul 10 23:35:19.142 INFO Fetch failed with 404: resource not found Jul 10 23:35:19.144934 coreos-metadata[1925]: Jul 10 23:35:19.142 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 10 23:35:19.144934 coreos-metadata[1925]: Jul 10 23:35:19.142 INFO Fetch successful Jul 10 23:35:19.144934 coreos-metadata[1925]: Jul 10 23:35:19.142 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 10 23:35:19.144934 coreos-metadata[1925]: Jul 10 23:35:19.142 INFO Fetch successful Jul 10 23:35:19.144934 coreos-metadata[1925]: Jul 10 23:35:19.142 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 10 23:35:19.152646 coreos-metadata[1925]: Jul 10 23:35:19.150 INFO Fetch successful Jul 10 23:35:19.152646 coreos-metadata[1925]: Jul 10 23:35:19.150 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 10 23:35:19.152646 coreos-metadata[1925]: Jul 10 23:35:19.150 INFO Fetch successful Jul 10 23:35:19.152646 coreos-metadata[1925]: Jul 10 23:35:19.150 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 10 23:35:19.152646 coreos-metadata[1925]: Jul 10 23:35:19.150 INFO Fetch successful Jul 10 23:35:19.177381 bash[2007]: Updated "/home/core/.ssh/authorized_keys" Jul 10 23:35:19.186717 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 23:35:19.204455 systemd[1]: Starting sshkeys.service... Jul 10 23:35:19.249468 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1675) Jul 10 23:35:19.318583 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 10 23:35:19.327811 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 10 23:35:19.342470 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 23:35:19.345898 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 23:35:19.398857 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
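coreos-metadata walks a fixed list of IMDS paths under the 2021-01-03 API version and treats 404 as "attribute not present" rather than a failure, which is why the /meta-data/ipv6 fetch above logs "Fetch failed with 404" while the run still succeeds. A self-contained sketch of that pattern (the key list mirrors the log; the token TTL is an assumption):

# Sketch: IMDSv2 metadata walk with 404 tolerance, mirroring coreos-metadata above.
import urllib.error
import urllib.request

IMDS = "http://169.254.169.254"

token_req = urllib.request.Request(
    IMDS + "/latest/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"})   # TTL is an assumption
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

def fetch(path):
    req = urllib.request.Request(IMDS + path, headers={"X-aws-ec2-metadata-token": token})
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None              # e.g. /meta-data/ipv6 on an IPv4-only instance
        raise

for key in ("instance-id", "instance-type", "local-ipv4", "public-ipv4", "ipv6",
            "placement/availability-zone", "hostname", "public-hostname"):
    print(key, "=", fetch("/2021-01-03/meta-data/" + key))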
Jul 10 23:35:19.485617 systemd-logind[1938]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 23:35:19.485661 systemd-logind[1938]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 10 23:35:19.485994 systemd-logind[1938]: New seat seat0. Jul 10 23:35:19.494634 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 23:35:19.587534 systemd-networkd[1868]: eth0: Gained IPv6LL Jul 10 23:35:19.600715 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 23:35:19.611919 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 23:35:19.625082 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 10 23:35:19.634475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:19.640666 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 23:35:19.713090 containerd[1952]: time="2025-07-10T23:35:19.711193081Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 10 23:35:19.767446 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 23:35:19.776596 coreos-metadata[2036]: Jul 10 23:35:19.772 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 10 23:35:19.776596 coreos-metadata[2036]: Jul 10 23:35:19.774 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 10 23:35:19.776596 coreos-metadata[2036]: Jul 10 23:35:19.775 INFO Fetch successful Jul 10 23:35:19.776596 coreos-metadata[2036]: Jul 10 23:35:19.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 10 23:35:19.787396 coreos-metadata[2036]: Jul 10 23:35:19.778 INFO Fetch successful Jul 10 23:35:19.787767 unknown[2036]: wrote ssh authorized keys file for user: core Jul 10 23:35:19.843386 update-ssh-keys[2118]: Updated "/home/core/.ssh/authorized_keys" Jul 10 23:35:19.845503 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 10 23:35:19.865899 locksmithd[1988]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 23:35:19.872579 systemd[1]: Finished sshkeys.service. Jul 10 23:35:19.973407 containerd[1952]: time="2025-07-10T23:35:19.972068102Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 23:35:19.987228 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 10 23:35:20.002687 dbus-daemon[1926]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 10 23:35:20.016616 amazon-ssm-agent[2096]: Initializing new seelog logger Jul 10 23:35:20.016616 amazon-ssm-agent[2096]: New Seelog Logger Creation Complete Jul 10 23:35:20.016616 amazon-ssm-agent[2096]: 2025/07/10 23:35:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 23:35:20.016616 amazon-ssm-agent[2096]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 23:35:20.016616 amazon-ssm-agent[2096]: 2025/07/10 23:35:20 processing appconfig overrides Jul 10 23:35:20.016616 amazon-ssm-agent[2096]: 2025/07/10 23:35:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 23:35:20.016616 amazon-ssm-agent[2096]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 10 23:35:20.016616 amazon-ssm-agent[2096]: 2025/07/10 23:35:20 processing appconfig overrides Jul 10 23:35:20.016616 amazon-ssm-agent[2096]: 2025/07/10 23:35:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 23:35:20.016616 amazon-ssm-agent[2096]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 23:35:20.013040 dbus-daemon[1926]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1970 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 10 23:35:20.019804 amazon-ssm-agent[2096]: 2025/07/10 23:35:20 processing appconfig overrides Jul 10 23:35:20.019804 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO Proxy environment variables: Jul 10 23:35:20.026773 containerd[1952]: time="2025-07-10T23:35:20.024335279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:35:20.026773 containerd[1952]: time="2025-07-10T23:35:20.024445415Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 23:35:20.026773 containerd[1952]: time="2025-07-10T23:35:20.024485255Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 23:35:20.026773 containerd[1952]: time="2025-07-10T23:35:20.024798623Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 10 23:35:20.026773 containerd[1952]: time="2025-07-10T23:35:20.024837179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 10 23:35:20.026773 containerd[1952]: time="2025-07-10T23:35:20.024959855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:35:20.026773 containerd[1952]: time="2025-07-10T23:35:20.024988307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 23:35:20.026773 containerd[1952]: time="2025-07-10T23:35:20.025338299Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:35:20.026773 containerd[1952]: time="2025-07-10T23:35:20.025421639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 23:35:20.026773 containerd[1952]: time="2025-07-10T23:35:20.025455455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:35:20.026773 containerd[1952]: time="2025-07-10T23:35:20.025479647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 23:35:20.027288 amazon-ssm-agent[2096]: 2025/07/10 23:35:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 23:35:20.027288 amazon-ssm-agent[2096]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 10 23:35:20.027288 amazon-ssm-agent[2096]: 2025/07/10 23:35:20 processing appconfig overrides Jul 10 23:35:20.034557 containerd[1952]: time="2025-07-10T23:35:20.025681019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 23:35:20.034557 containerd[1952]: time="2025-07-10T23:35:20.026155547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 23:35:20.029846 systemd[1]: Starting polkit.service - Authorization Manager... Jul 10 23:35:20.042202 containerd[1952]: time="2025-07-10T23:35:20.035680175Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:35:20.042202 containerd[1952]: time="2025-07-10T23:35:20.035742491Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 23:35:20.042202 containerd[1952]: time="2025-07-10T23:35:20.035985011Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 23:35:20.042202 containerd[1952]: time="2025-07-10T23:35:20.036085163Z" level=info msg="metadata content store policy set" policy=shared Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.069190259Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.069303695Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.069340811Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.069405803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.069442307Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.069719087Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.070147631Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.070332227Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.070389371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.070424615Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.070455527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.070489535Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.070520963Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 23:35:20.073380 containerd[1952]: time="2025-07-10T23:35:20.070552031Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070585655Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070617059Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070646159Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070673867Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070712159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070744655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070774871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070805039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070833251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070866839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070894715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070925315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070954643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074022 containerd[1952]: time="2025-07-10T23:35:20.070989143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074678 containerd[1952]: time="2025-07-10T23:35:20.071017367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074678 containerd[1952]: time="2025-07-10T23:35:20.071044967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jul 10 23:35:20.074678 containerd[1952]: time="2025-07-10T23:35:20.071074691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074678 containerd[1952]: time="2025-07-10T23:35:20.071114015Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 10 23:35:20.074678 containerd[1952]: time="2025-07-10T23:35:20.071157551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074678 containerd[1952]: time="2025-07-10T23:35:20.071188031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.074678 containerd[1952]: time="2025-07-10T23:35:20.071216843Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 23:35:20.083911 containerd[1952]: time="2025-07-10T23:35:20.078414971Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 23:35:20.083911 containerd[1952]: time="2025-07-10T23:35:20.080118587Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 10 23:35:20.083911 containerd[1952]: time="2025-07-10T23:35:20.080154587Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 23:35:20.083911 containerd[1952]: time="2025-07-10T23:35:20.080216675Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 10 23:35:20.083911 containerd[1952]: time="2025-07-10T23:35:20.080241287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 23:35:20.083911 containerd[1952]: time="2025-07-10T23:35:20.080310431Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 10 23:35:20.083911 containerd[1952]: time="2025-07-10T23:35:20.080381279Z" level=info msg="NRI interface is disabled by configuration." Jul 10 23:35:20.083911 containerd[1952]: time="2025-07-10T23:35:20.080465639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 10 23:35:20.088166 containerd[1952]: time="2025-07-10T23:35:20.086009435Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 23:35:20.088166 containerd[1952]: time="2025-07-10T23:35:20.086178431Z" level=info msg="Connect containerd service" Jul 10 23:35:20.088166 containerd[1952]: time="2025-07-10T23:35:20.086282747Z" level=info msg="using legacy CRI server" Jul 10 23:35:20.088166 containerd[1952]: time="2025-07-10T23:35:20.086302859Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 23:35:20.088166 containerd[1952]: time="2025-07-10T23:35:20.087469127Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 23:35:20.098311 containerd[1952]: time="2025-07-10T23:35:20.098240195Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 23:35:20.106402 
containerd[1952]: time="2025-07-10T23:35:20.098723627Z" level=info msg="Start subscribing containerd event" Jul 10 23:35:20.106402 containerd[1952]: time="2025-07-10T23:35:20.098807867Z" level=info msg="Start recovering state" Jul 10 23:35:20.106402 containerd[1952]: time="2025-07-10T23:35:20.098930483Z" level=info msg="Start event monitor" Jul 10 23:35:20.106402 containerd[1952]: time="2025-07-10T23:35:20.098953751Z" level=info msg="Start snapshots syncer" Jul 10 23:35:20.106402 containerd[1952]: time="2025-07-10T23:35:20.098977139Z" level=info msg="Start cni network conf syncer for default" Jul 10 23:35:20.106402 containerd[1952]: time="2025-07-10T23:35:20.098996903Z" level=info msg="Start streaming server" Jul 10 23:35:20.106402 containerd[1952]: time="2025-07-10T23:35:20.099653063Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 23:35:20.106402 containerd[1952]: time="2025-07-10T23:35:20.099745895Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 23:35:20.106402 containerd[1952]: time="2025-07-10T23:35:20.101494271Z" level=info msg="containerd successfully booted in 0.407774s" Jul 10 23:35:20.101627 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 23:35:20.114636 polkitd[2138]: Started polkitd version 121 Jul 10 23:35:20.122401 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO http_proxy: Jul 10 23:35:20.176663 polkitd[2138]: Loading rules from directory /etc/polkit-1/rules.d Jul 10 23:35:20.176787 polkitd[2138]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 10 23:35:20.177999 polkitd[2138]: Finished loading, compiling and executing 2 rules Jul 10 23:35:20.181069 dbus-daemon[1926]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 10 23:35:20.181945 systemd[1]: Started polkit.service - Authorization Manager. Jul 10 23:35:20.185310 polkitd[2138]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 10 23:35:20.219897 systemd-hostnamed[1970]: Hostname set to (transient) Jul 10 23:35:20.221078 systemd-resolved[1869]: System hostname changed to 'ip-172-31-24-228'. Jul 10 23:35:20.229760 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO no_proxy: Jul 10 23:35:20.333560 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO https_proxy: Jul 10 23:35:20.431776 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO Checking if agent identity type OnPrem can be assumed Jul 10 23:35:20.530787 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO Checking if agent identity type EC2 can be assumed Jul 10 23:35:20.631972 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO Agent will take identity from EC2 Jul 10 23:35:20.731996 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 10 23:35:20.832469 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 10 23:35:20.925631 tar[1943]: linux-arm64/README.md Jul 10 23:35:20.931763 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 10 23:35:20.955922 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 10 23:35:21.030993 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 10 23:35:21.132385 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 10 23:35:21.211941 sshd_keygen[1974]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 23:35:21.232495 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO [amazon-ssm-agent] Starting Core Agent Jul 10 23:35:21.275985 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 23:35:21.290922 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 23:35:21.305953 systemd[1]: Started sshd@0-172.31.24.228:22-147.75.109.163:49964.service - OpenSSH per-connection server daemon (147.75.109.163:49964). Jul 10 23:35:21.333596 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 10 23:35:21.336455 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 23:35:21.336917 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 23:35:21.353279 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 23:35:21.402189 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 23:35:21.415970 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 23:35:21.430021 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 23:35:21.433014 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 23:35:21.439217 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO [Registrar] Starting registrar module Jul 10 23:35:21.541053 amazon-ssm-agent[2096]: 2025-07-10 23:35:20 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 10 23:35:21.580939 sshd[2164]: Accepted publickey for core from 147.75.109.163 port 49964 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:35:21.584700 sshd-session[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:21.600911 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 23:35:21.611935 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 23:35:21.634458 amazon-ssm-agent[2096]: 2025-07-10 23:35:21 INFO [EC2Identity] EC2 registration was successful. Jul 10 23:35:21.634458 amazon-ssm-agent[2096]: 2025-07-10 23:35:21 INFO [CredentialRefresher] credentialRefresher has started Jul 10 23:35:21.634458 amazon-ssm-agent[2096]: 2025-07-10 23:35:21 INFO [CredentialRefresher] Starting credentials refresher loop Jul 10 23:35:21.634458 amazon-ssm-agent[2096]: 2025-07-10 23:35:21 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 10 23:35:21.640580 systemd-logind[1938]: New session 1 of user core. Jul 10 23:35:21.642377 amazon-ssm-agent[2096]: 2025-07-10 23:35:21 INFO [CredentialRefresher] Next credential rotation will be in 32.11665809556666 minutes Jul 10 23:35:21.657423 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 23:35:21.674906 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 23:35:21.691193 (systemd)[2175]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 23:35:21.696269 systemd-logind[1938]: New session c1 of user core. 
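The "Accepted publickey for core ... SHA256:/TRTB1Lh8fb..." entries identify the client key by its OpenSSH-style fingerprint: the unpadded base64 of the SHA-256 digest of the raw key blob. A small sketch that computes the same value from the entries in /home/core/.ssh/authorized_keys (equivalent to running `ssh-keygen -lf` on the file), which is handy for matching sshd log lines to a specific authorized key:

```python
#!/usr/bin/env python3
"""Sketch: OpenSSH SHA256 fingerprints for keys in an authorized_keys file."""
import base64
import hashlib
import sys

def sha256_fingerprint(authorized_keys_line: str) -> str:
    # Line format: "<key-type> <base64-blob> [comment]".  OpenSSH's SHA256
    # fingerprint is the unpadded base64 of sha256(raw key blob).
    blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    # Default path taken from the log's update-ssh-keys message.
    path = sys.argv[1] if len(sys.argv) > 1 else "/home/core/.ssh/authorized_keys"
    with open(path) as keys:
        for line in keys:
            line = line.strip()
            if line and not line.startswith("#"):
                print(sha256_fingerprint(line), line.split()[-1])
```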
Jul 10 23:35:21.874954 ntpd[1930]: Listen normally on 6 eth0 [fe80::4bd:deff:fef3:8d0f%2]:123 Jul 10 23:35:21.876061 ntpd[1930]: 10 Jul 23:35:21 ntpd[1930]: Listen normally on 6 eth0 [fe80::4bd:deff:fef3:8d0f%2]:123 Jul 10 23:35:21.987951 systemd[2175]: Queued start job for default target default.target. Jul 10 23:35:21.998780 systemd[2175]: Created slice app.slice - User Application Slice. Jul 10 23:35:21.998846 systemd[2175]: Reached target paths.target - Paths. Jul 10 23:35:21.998936 systemd[2175]: Reached target timers.target - Timers. Jul 10 23:35:22.003610 systemd[2175]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 23:35:22.031642 systemd[2175]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 23:35:22.031887 systemd[2175]: Reached target sockets.target - Sockets. Jul 10 23:35:22.031979 systemd[2175]: Reached target basic.target - Basic System. Jul 10 23:35:22.032064 systemd[2175]: Reached target default.target - Main User Target. Jul 10 23:35:22.032124 systemd[2175]: Startup finished in 323ms. Jul 10 23:35:22.032458 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 23:35:22.047654 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 23:35:22.207924 systemd[1]: Started sshd@1-172.31.24.228:22-147.75.109.163:49978.service - OpenSSH per-connection server daemon (147.75.109.163:49978). Jul 10 23:35:22.409115 sshd[2186]: Accepted publickey for core from 147.75.109.163 port 49978 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:35:22.411509 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:22.420601 systemd-logind[1938]: New session 2 of user core. Jul 10 23:35:22.426616 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 23:35:22.553955 sshd[2188]: Connection closed by 147.75.109.163 port 49978 Jul 10 23:35:22.554823 sshd-session[2186]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:22.563150 systemd[1]: sshd@1-172.31.24.228:22-147.75.109.163:49978.service: Deactivated successfully. Jul 10 23:35:22.567071 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 23:35:22.569184 systemd-logind[1938]: Session 2 logged out. Waiting for processes to exit. Jul 10 23:35:22.571450 systemd-logind[1938]: Removed session 2. Jul 10 23:35:22.588301 systemd[1]: Started sshd@2-172.31.24.228:22-147.75.109.163:49984.service - OpenSSH per-connection server daemon (147.75.109.163:49984). Jul 10 23:35:22.663265 amazon-ssm-agent[2096]: 2025-07-10 23:35:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 10 23:35:22.746787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:35:22.751909 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 23:35:22.754652 systemd[1]: Startup finished in 1.108s (kernel) + 9.848s (initrd) + 10.161s (userspace) = 21.118s. 
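The "Listen normally on 6 eth0 [fe80::4bd:deff:fef3:8d0f%2]:123" entry closes the loop on the earlier bind failure: ntpd first tried the link-local address before the kernel had finished bringing it up on eth0 (hence "Cannot assign requested address"), kept watching the routing socket, and retried successfully once systemd-networkd reported "eth0: Gained IPv6LL". A hedged sketch of the same bind, showing the zone id a fe80:: address needs and the errno returned while the address is still missing; NTP's port 123 needs root, so the sketch defaults to an unprivileged port:

```python
#!/usr/bin/env python3
"""Sketch: bind a link-local IPv6 address the way ntpd binds fe80::...%eth0.

A fe80::/10 address is only meaningful together with a zone (scope) id, and
binding it fails with EADDRNOTAVAIL until the kernel has actually assigned
the address - the "Cannot assign requested address" failure seen earlier.
"""
import errno
import socket

# Address from the log; the interface name eth0 is an assumption (zone "%2").
ADDRESS = "fe80::4bd:deff:fef3:8d0f%eth0"
PORT = 12345  # ntpd uses 123, which needs root; use an unprivileged port here

def bind_link_local(address: str, port: int) -> None:
    # getaddrinfo turns "addr%zone" into a 4-tuple that carries the scope id.
    sockaddr = socket.getaddrinfo(address, port, socket.AF_INET6,
                                  socket.SOCK_DGRAM)[0][4]
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        sock.bind(sockaddr)
        print("bound", sockaddr)
    except OSError as err:
        if err.errno == errno.EADDRNOTAVAIL:
            print("address not assigned yet: Cannot assign requested address")
        else:
            raise
    finally:
        sock.close()

if __name__ == "__main__":
    bind_link_local(ADDRESS, PORT)
```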
Jul 10 23:35:22.765479 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:35:22.769943 amazon-ssm-agent[2096]: 2025-07-10 23:35:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2197) started Jul 10 23:35:22.801069 sshd[2194]: Accepted publickey for core from 147.75.109.163 port 49984 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:35:22.807615 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:22.825462 systemd-logind[1938]: New session 3 of user core. Jul 10 23:35:22.835715 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 23:35:22.871282 amazon-ssm-agent[2096]: 2025-07-10 23:35:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 10 23:35:22.967842 sshd[2213]: Connection closed by 147.75.109.163 port 49984 Jul 10 23:35:22.968453 sshd-session[2194]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:22.976005 systemd[1]: sshd@2-172.31.24.228:22-147.75.109.163:49984.service: Deactivated successfully. Jul 10 23:35:22.980297 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 23:35:22.981952 systemd-logind[1938]: Session 3 logged out. Waiting for processes to exit. Jul 10 23:35:22.984535 systemd-logind[1938]: Removed session 3. Jul 10 23:35:24.099369 kubelet[2206]: E0710 23:35:24.099275 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:35:24.103874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:35:24.104246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:35:24.104915 systemd[1]: kubelet.service: Consumed 1.510s CPU time, 259.4M memory peak. Jul 10 23:35:26.242574 systemd-resolved[1869]: Clock change detected. Flushing caches. Jul 10 23:35:33.382199 systemd[1]: Started sshd@3-172.31.24.228:22-147.75.109.163:56176.service - OpenSSH per-connection server daemon (147.75.109.163:56176). Jul 10 23:35:33.562476 sshd[2230]: Accepted publickey for core from 147.75.109.163 port 56176 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:35:33.564900 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:33.573786 systemd-logind[1938]: New session 4 of user core. Jul 10 23:35:33.583042 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 23:35:33.707714 sshd[2232]: Connection closed by 147.75.109.163 port 56176 Jul 10 23:35:33.707580 sshd-session[2230]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:33.713882 systemd[1]: sshd@3-172.31.24.228:22-147.75.109.163:56176.service: Deactivated successfully. Jul 10 23:35:33.717523 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 23:35:33.719199 systemd-logind[1938]: Session 4 logged out. Waiting for processes to exit. Jul 10 23:35:33.720985 systemd-logind[1938]: Removed session 4. Jul 10 23:35:33.749250 systemd[1]: Started sshd@4-172.31.24.228:22-147.75.109.163:56188.service - OpenSSH per-connection server daemon (147.75.109.163:56188). 
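The kubelet failure above ("failed to read kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is expected on a node where kubeadm has not yet run: the unit is enabled at boot, but the config file only appears after `kubeadm init`/`join`, so each start exits with status 1 and systemd keeps scheduling restarts (the "restart counter" lines that follow). A trivial pre-flight sketch, not part of Flatcar or kubeadm, that reproduces the check:

```python
#!/usr/bin/env python3
"""Sketch: pre-flight check for the kubelet config file named in the error."""
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path from the log message

def kubelet_config_present(path: str = KUBELET_CONFIG) -> bool:
    if not os.path.isfile(path):
        print(f"{path}: no such file or directory - "
              "kubelet exits until kubeadm init/join writes it")
        return False
    if not os.access(path, os.R_OK):
        print(f"{path}: present but not readable by this user")
        return False
    print(f"{path}: present and readable")
    return True

if __name__ == "__main__":
    sys.exit(0 if kubelet_config_present() else 1)
```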
Jul 10 23:35:33.929363 sshd[2238]: Accepted publickey for core from 147.75.109.163 port 56188 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:35:33.931819 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:33.939692 systemd-logind[1938]: New session 5 of user core. Jul 10 23:35:33.948972 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 23:35:34.065725 sshd[2240]: Connection closed by 147.75.109.163 port 56188 Jul 10 23:35:34.066980 sshd-session[2238]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:34.073133 systemd-logind[1938]: Session 5 logged out. Waiting for processes to exit. Jul 10 23:35:34.074535 systemd[1]: sshd@4-172.31.24.228:22-147.75.109.163:56188.service: Deactivated successfully. Jul 10 23:35:34.077619 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 23:35:34.079856 systemd-logind[1938]: Removed session 5. Jul 10 23:35:34.116398 systemd[1]: Started sshd@5-172.31.24.228:22-147.75.109.163:56190.service - OpenSSH per-connection server daemon (147.75.109.163:56190). Jul 10 23:35:34.296396 sshd[2246]: Accepted publickey for core from 147.75.109.163 port 56190 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:35:34.298846 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:34.307876 systemd-logind[1938]: New session 6 of user core. Jul 10 23:35:34.316000 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 23:35:34.440184 sshd[2248]: Connection closed by 147.75.109.163 port 56190 Jul 10 23:35:34.441018 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:34.447168 systemd[1]: sshd@5-172.31.24.228:22-147.75.109.163:56190.service: Deactivated successfully. Jul 10 23:35:34.450214 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 23:35:34.451466 systemd-logind[1938]: Session 6 logged out. Waiting for processes to exit. Jul 10 23:35:34.453840 systemd-logind[1938]: Removed session 6. Jul 10 23:35:34.474857 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 23:35:34.481142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:34.494154 systemd[1]: Started sshd@6-172.31.24.228:22-147.75.109.163:56200.service - OpenSSH per-connection server daemon (147.75.109.163:56200). Jul 10 23:35:34.675340 sshd[2255]: Accepted publickey for core from 147.75.109.163 port 56200 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:35:34.678310 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:34.691406 systemd-logind[1938]: New session 7 of user core. Jul 10 23:35:34.700061 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 23:35:34.826084 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 23:35:34.827346 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:35:34.832110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 23:35:34.834968 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:35:34.849871 sudo[2262]: pam_unix(sudo:session): session closed for user root Jul 10 23:35:34.873573 sshd[2259]: Connection closed by 147.75.109.163 port 56200 Jul 10 23:35:34.878086 sshd-session[2255]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:34.886994 systemd[1]: sshd@6-172.31.24.228:22-147.75.109.163:56200.service: Deactivated successfully. Jul 10 23:35:34.891114 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 23:35:34.896100 systemd-logind[1938]: Session 7 logged out. Waiting for processes to exit. Jul 10 23:35:34.920259 systemd[1]: Started sshd@7-172.31.24.228:22-147.75.109.163:56204.service - OpenSSH per-connection server daemon (147.75.109.163:56204). Jul 10 23:35:34.922522 systemd-logind[1938]: Removed session 7. Jul 10 23:35:34.925220 kubelet[2266]: E0710 23:35:34.923246 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:35:34.933293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:35:34.933609 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:35:34.936889 systemd[1]: kubelet.service: Consumed 305ms CPU time, 106.6M memory peak. Jul 10 23:35:35.114533 sshd[2277]: Accepted publickey for core from 147.75.109.163 port 56204 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:35:35.117031 sshd-session[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:35.126252 systemd-logind[1938]: New session 8 of user core. Jul 10 23:35:35.136011 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 23:35:35.240315 sudo[2283]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 23:35:35.242185 sudo[2283]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:35:35.248512 sudo[2283]: pam_unix(sudo:session): session closed for user root Jul 10 23:35:35.258420 sudo[2282]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 23:35:35.259051 sudo[2282]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:35:35.283562 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:35:35.330795 augenrules[2305]: No rules Jul 10 23:35:35.333404 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:35:35.334881 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:35:35.337553 sudo[2282]: pam_unix(sudo:session): session closed for user root Jul 10 23:35:35.360429 sshd[2281]: Connection closed by 147.75.109.163 port 56204 Jul 10 23:35:35.361305 sshd-session[2277]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:35.368139 systemd-logind[1938]: Session 8 logged out. Waiting for processes to exit. Jul 10 23:35:35.368146 systemd[1]: sshd@7-172.31.24.228:22-147.75.109.163:56204.service: Deactivated successfully. Jul 10 23:35:35.371678 systemd[1]: session-8.scope: Deactivated successfully. 
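The sudo commands in sessions 7 and 8 above switch SELinux to enforcing (`setenforce 1`) and then drop and reload the audit rules (augenrules ends up with "No rules"). The resulting SELinux mode can be read straight from selinuxfs, which is essentially what `getenforce` does; a minimal sketch, assuming selinuxfs is mounted at the usual /sys/fs/selinux:

```python
#!/usr/bin/env python3
"""Sketch: read the SELinux mode that `setenforce 1` just switched on."""

SELINUXFS_ENFORCE = "/sys/fs/selinux/enforce"  # assumes selinuxfs is mounted here

def selinux_mode() -> str:
    try:
        with open(SELINUXFS_ENFORCE) as f:
            return "enforcing" if f.read().strip() == "1" else "permissive"
    except FileNotFoundError:
        return "disabled (selinuxfs not mounted)"

if __name__ == "__main__":
    print("SELinux is", selinux_mode())
```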
Jul 10 23:35:35.373621 systemd-logind[1938]: Removed session 8. Jul 10 23:35:35.404269 systemd[1]: Started sshd@8-172.31.24.228:22-147.75.109.163:56212.service - OpenSSH per-connection server daemon (147.75.109.163:56212). Jul 10 23:35:35.596241 sshd[2314]: Accepted publickey for core from 147.75.109.163 port 56212 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:35:35.598432 sshd-session[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:35.607405 systemd-logind[1938]: New session 9 of user core. Jul 10 23:35:35.614989 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 23:35:35.718312 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 23:35:35.719506 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:35:36.267341 (dockerd)[2334]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 23:35:36.267458 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 23:35:36.666473 dockerd[2334]: time="2025-07-10T23:35:36.665476619Z" level=info msg="Starting up" Jul 10 23:35:36.790834 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2220888417-merged.mount: Deactivated successfully. Jul 10 23:35:36.800053 systemd[1]: var-lib-docker-metacopy\x2dcheck2591676262-merged.mount: Deactivated successfully. Jul 10 23:35:36.825341 dockerd[2334]: time="2025-07-10T23:35:36.825262176Z" level=info msg="Loading containers: start." Jul 10 23:35:37.077811 kernel: Initializing XFRM netlink socket Jul 10 23:35:37.110403 (udev-worker)[2357]: Network interface NamePolicy= disabled on kernel command line. Jul 10 23:35:37.209674 systemd-networkd[1868]: docker0: Link UP Jul 10 23:35:37.257054 dockerd[2334]: time="2025-07-10T23:35:37.257002678Z" level=info msg="Loading containers: done." Jul 10 23:35:37.288160 dockerd[2334]: time="2025-07-10T23:35:37.288079678Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 23:35:37.288386 dockerd[2334]: time="2025-07-10T23:35:37.288221422Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 10 23:35:37.288598 dockerd[2334]: time="2025-07-10T23:35:37.288544486Z" level=info msg="Daemon has completed initialization" Jul 10 23:35:37.355766 dockerd[2334]: time="2025-07-10T23:35:37.354725867Z" level=info msg="API listen on /run/docker.sock" Jul 10 23:35:37.354884 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 23:35:38.641710 containerd[1952]: time="2025-07-10T23:35:38.641615953Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 10 23:35:39.279558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269722720.mount: Deactivated successfully. 
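"API listen on /run/docker.sock" means the Docker Engine API is now served only on the local Unix socket. A standard-library sketch of talking to it over that socket, asking for the version string that should match the "version=27.3.1" field logged above (normally the official SDK would be used; the /version route is the documented Engine API endpoint):

```python
#!/usr/bin/env python3
"""Sketch: query the Docker Engine API over /run/docker.sock (stdlib only)."""
import http.client
import json
import socket

DOCKER_SOCK = "/run/docker.sock"  # from "API listen on /run/docker.sock"

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that connects to a Unix socket instead of host:port."""
    def __init__(self, sock_path: str):
        super().__init__("localhost")
        self._sock_path = sock_path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._sock_path)

def docker_get(path: str) -> dict:
    conn = UnixHTTPConnection(DOCKER_SOCK)
    try:
        conn.request("GET", path)
        return json.loads(conn.getresponse().read())
    finally:
        conn.close()

if __name__ == "__main__":
    version = docker_get("/version")
    print("engine:", version.get("Version"), "api:", version.get("ApiVersion"))
```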
Jul 10 23:35:40.651801 containerd[1952]: time="2025-07-10T23:35:40.651190467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:40.654547 containerd[1952]: time="2025-07-10T23:35:40.654476655Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jul 10 23:35:40.657146 containerd[1952]: time="2025-07-10T23:35:40.657068403Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:40.664841 containerd[1952]: time="2025-07-10T23:35:40.664751583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:40.668435 containerd[1952]: time="2025-07-10T23:35:40.668171187Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.02649869s" Jul 10 23:35:40.668435 containerd[1952]: time="2025-07-10T23:35:40.668236119Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 10 23:35:40.669786 containerd[1952]: time="2025-07-10T23:35:40.669676707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 10 23:35:42.094886 containerd[1952]: time="2025-07-10T23:35:42.094827314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:42.097121 containerd[1952]: time="2025-07-10T23:35:42.097034174Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228" Jul 10 23:35:42.098250 containerd[1952]: time="2025-07-10T23:35:42.098183498Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:42.103862 containerd[1952]: time="2025-07-10T23:35:42.103723778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:42.106130 containerd[1952]: time="2025-07-10T23:35:42.106074242Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.436343451s" Jul 10 23:35:42.107529 containerd[1952]: time="2025-07-10T23:35:42.106368962Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 10 23:35:42.108200 
containerd[1952]: time="2025-07-10T23:35:42.108155498Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 10 23:35:43.271257 containerd[1952]: time="2025-07-10T23:35:43.271190512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:43.273249 containerd[1952]: time="2025-07-10T23:35:43.273182920Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jul 10 23:35:43.273725 containerd[1952]: time="2025-07-10T23:35:43.273675904Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:43.280246 containerd[1952]: time="2025-07-10T23:35:43.279158452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:43.281783 containerd[1952]: time="2025-07-10T23:35:43.281682376Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.173466698s" Jul 10 23:35:43.281949 containerd[1952]: time="2025-07-10T23:35:43.281795596Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 10 23:35:43.282595 containerd[1952]: time="2025-07-10T23:35:43.282505048Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 10 23:35:44.492486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352190958.mount: Deactivated successfully. 
Jul 10 23:35:45.006279 containerd[1952]: time="2025-07-10T23:35:45.006221465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:45.008139 containerd[1952]: time="2025-07-10T23:35:45.007839617Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jul 10 23:35:45.008770 containerd[1952]: time="2025-07-10T23:35:45.008473001Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:45.012141 containerd[1952]: time="2025-07-10T23:35:45.012056621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:45.013701 containerd[1952]: time="2025-07-10T23:35:45.013517225Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.729719693s" Jul 10 23:35:45.013701 containerd[1952]: time="2025-07-10T23:35:45.013567013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 10 23:35:45.014478 containerd[1952]: time="2025-07-10T23:35:45.014413817Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 23:35:45.159849 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 23:35:45.167332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:45.485661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:35:45.496330 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:35:45.593352 kubelet[2603]: E0710 23:35:45.593152 2603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:35:45.598495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:35:45.600348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:35:45.601414 systemd[1]: kubelet.service: Consumed 293ms CPU time, 105.4M memory peak. Jul 10 23:35:45.608365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330449798.mount: Deactivated successfully. 
Jul 10 23:35:46.941805 containerd[1952]: time="2025-07-10T23:35:46.941331454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:46.943578 containerd[1952]: time="2025-07-10T23:35:46.943476754Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 10 23:35:46.946215 containerd[1952]: time="2025-07-10T23:35:46.946139746Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:46.952537 containerd[1952]: time="2025-07-10T23:35:46.952436098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:46.955326 containerd[1952]: time="2025-07-10T23:35:46.955105834Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.940499225s" Jul 10 23:35:46.955326 containerd[1952]: time="2025-07-10T23:35:46.955164922Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 10 23:35:46.956356 containerd[1952]: time="2025-07-10T23:35:46.956049454Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 23:35:47.448469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4091499130.mount: Deactivated successfully. 
Jul 10 23:35:47.462800 containerd[1952]: time="2025-07-10T23:35:47.462484821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:47.464492 containerd[1952]: time="2025-07-10T23:35:47.464404761Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 10 23:35:47.467004 containerd[1952]: time="2025-07-10T23:35:47.466960569Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:47.472877 containerd[1952]: time="2025-07-10T23:35:47.472812513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:47.475304 containerd[1952]: time="2025-07-10T23:35:47.475229913Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 519.129567ms" Jul 10 23:35:47.475447 containerd[1952]: time="2025-07-10T23:35:47.475349193Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 23:35:47.476147 containerd[1952]: time="2025-07-10T23:35:47.476087613Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 10 23:35:48.043222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910679073.mount: Deactivated successfully. Jul 10 23:35:50.170474 containerd[1952]: time="2025-07-10T23:35:50.170396554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:50.172773 containerd[1952]: time="2025-07-10T23:35:50.172660234Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 10 23:35:50.175955 containerd[1952]: time="2025-07-10T23:35:50.175889878Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:50.184755 containerd[1952]: time="2025-07-10T23:35:50.182690374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:50.185541 containerd[1952]: time="2025-07-10T23:35:50.185494690Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.709349225s" Jul 10 23:35:50.185685 containerd[1952]: time="2025-07-10T23:35:50.185656030Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 10 23:35:50.622903 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
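The pull messages above carry enough data for a rough throughput estimate: containerd reports the bytes read from the registry and the elapsed time for each image, e.g. kube-apiserver at ~26.3 MB in about 2.0 s (~12 MiB/s) and etcd at ~67.8 MB in about 2.7 s (~24 MiB/s). A small sketch of that arithmetic, with the byte counts copied from the "bytes read" fields and the durations rounded from the "Pulled image ... in" messages:

```python
#!/usr/bin/env python3
"""Rough pull throughput from the containerd log lines above."""

# image -> (bytes read from the registry, elapsed seconds, both from the log;
# durations rounded to three decimals)
PULLS = {
    "kube-apiserver:v1.32.6":          (26_328_194, 2.026),
    "kube-controller-manager:v1.32.6": (22_529_228, 1.436),
    "kube-scheduler:v1.32.6":          (17_484_141, 1.173),
    "kube-proxy:v1.32.6":              (27_378_406, 1.730),
    "coredns:v1.11.3":                 (16_951_622, 1.940),
    "pause:3.10":                      (268_703,    0.519),
    "etcd:3.5.16-0":                   (67_812_469, 2.709),
}

if __name__ == "__main__":
    for image, (nbytes, secs) in PULLS.items():
        mib_per_s = nbytes / secs / 2**20
        print(f"{image:35s} {nbytes/2**20:7.1f} MiB in {secs:5.2f} s "
              f"-> {mib_per_s:5.1f} MiB/s")
```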
Jul 10 23:35:55.660888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 10 23:35:55.668185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:56.018178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:35:56.026384 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:35:56.131289 kubelet[2751]: E0710 23:35:56.131214 2751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:35:56.136803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:35:56.137370 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:35:56.138351 systemd[1]: kubelet.service: Consumed 281ms CPU time, 109.6M memory peak. Jul 10 23:35:59.195266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:35:59.195688 systemd[1]: kubelet.service: Consumed 281ms CPU time, 109.6M memory peak. Jul 10 23:35:59.203278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:59.258537 systemd[1]: Reload requested from client PID 2766 ('systemctl') (unit session-9.scope)... Jul 10 23:35:59.258818 systemd[1]: Reloading... Jul 10 23:35:59.544778 zram_generator::config[2814]: No configuration found. Jul 10 23:35:59.774753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:36:00.005704 systemd[1]: Reloading finished in 746 ms. Jul 10 23:36:00.090926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:00.103044 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:36:00.106262 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 23:36:00.108149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:00.108245 systemd[1]: kubelet.service: Consumed 232ms CPU time, 94.9M memory peak. Jul 10 23:36:00.120275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:36:00.432024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:00.454370 (kubelet)[2876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:36:00.547876 kubelet[2876]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:36:00.547876 kubelet[2876]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 23:36:00.547876 kubelet[2876]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 23:36:00.548469 kubelet[2876]: I0710 23:36:00.548005 2876 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 23:36:02.635390 kubelet[2876]: I0710 23:36:02.635314 2876 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 23:36:02.635390 kubelet[2876]: I0710 23:36:02.635370 2876 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 23:36:02.636072 kubelet[2876]: I0710 23:36:02.635894 2876 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 23:36:02.679611 kubelet[2876]: E0710 23:36:02.679556 2876 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.24.228:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.228:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:36:02.683007 kubelet[2876]: I0710 23:36:02.682571 2876 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 23:36:02.694786 kubelet[2876]: E0710 23:36:02.694474 2876 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 23:36:02.694786 kubelet[2876]: I0710 23:36:02.694545 2876 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 23:36:02.700426 kubelet[2876]: I0710 23:36:02.700373 2876 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 23:36:02.702609 kubelet[2876]: I0710 23:36:02.702494 2876 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 23:36:02.702961 kubelet[2876]: I0710 23:36:02.702586 2876 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-228","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 23:36:02.703195 kubelet[2876]: I0710 23:36:02.703103 2876 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 23:36:02.703195 kubelet[2876]: I0710 23:36:02.703136 2876 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 23:36:02.703600 kubelet[2876]: I0710 23:36:02.703529 2876 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:36:02.710042 kubelet[2876]: I0710 23:36:02.709972 2876 kubelet.go:446] "Attempting to sync node with API server" Jul 10 23:36:02.710724 kubelet[2876]: I0710 23:36:02.710522 2876 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 23:36:02.710724 kubelet[2876]: I0710 23:36:02.710572 2876 kubelet.go:352] "Adding apiserver pod source" Jul 10 23:36:02.710724 kubelet[2876]: I0710 23:36:02.710595 2876 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 23:36:02.713496 kubelet[2876]: W0710 23:36:02.713448 2876 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-228&limit=500&resourceVersion=0": dial tcp 172.31.24.228:6443: connect: connection refused Jul 10 23:36:02.713769 kubelet[2876]: E0710 23:36:02.713693 2876 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-228&limit=500&resourceVersion=0\": dial tcp 172.31.24.228:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:36:02.715204 kubelet[2876]: W0710 
23:36:02.715148 2876 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.228:6443: connect: connection refused Jul 10 23:36:02.715318 kubelet[2876]: E0710 23:36:02.715216 2876 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.228:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:36:02.715936 kubelet[2876]: I0710 23:36:02.715895 2876 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 10 23:36:02.718282 kubelet[2876]: I0710 23:36:02.718230 2876 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 23:36:02.718513 kubelet[2876]: W0710 23:36:02.718478 2876 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 23:36:02.722950 kubelet[2876]: I0710 23:36:02.722897 2876 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 23:36:02.722950 kubelet[2876]: I0710 23:36:02.722960 2876 server.go:1287] "Started kubelet" Jul 10 23:36:02.732841 kubelet[2876]: I0710 23:36:02.732786 2876 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 23:36:02.734958 kubelet[2876]: E0710 23:36:02.734475 2876 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.228:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.228:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-228.1851080885e45f59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-228,UID:ip-172-31-24-228,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-228,},FirstTimestamp:2025-07-10 23:36:02.722930521 +0000 UTC m=+2.257379293,LastTimestamp:2025-07-10 23:36:02.722930521 +0000 UTC m=+2.257379293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-228,}" Jul 10 23:36:02.741077 kubelet[2876]: I0710 23:36:02.741010 2876 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 23:36:02.744773 kubelet[2876]: I0710 23:36:02.742707 2876 server.go:479] "Adding debug handlers to kubelet server" Jul 10 23:36:02.745158 kubelet[2876]: I0710 23:36:02.745086 2876 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 23:36:02.745594 kubelet[2876]: I0710 23:36:02.745566 2876 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 23:36:02.746106 kubelet[2876]: I0710 23:36:02.746071 2876 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 23:36:02.747273 kubelet[2876]: I0710 23:36:02.747240 2876 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 23:36:02.749490 kubelet[2876]: I0710 23:36:02.749458 2876 desired_state_of_world_populator.go:150] "Desired 
state populator starts to run" Jul 10 23:36:02.749892 kubelet[2876]: I0710 23:36:02.749864 2876 reconciler.go:26] "Reconciler: start to sync state" Jul 10 23:36:02.750358 kubelet[2876]: E0710 23:36:02.750294 2876 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-228\" not found" Jul 10 23:36:02.751118 kubelet[2876]: I0710 23:36:02.751081 2876 factory.go:221] Registration of the systemd container factory successfully Jul 10 23:36:02.751449 kubelet[2876]: I0710 23:36:02.751396 2876 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 23:36:02.752968 kubelet[2876]: E0710 23:36:02.752910 2876 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-228?timeout=10s\": dial tcp 172.31.24.228:6443: connect: connection refused" interval="200ms" Jul 10 23:36:02.753329 kubelet[2876]: W0710 23:36:02.753259 2876 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.228:6443: connect: connection refused Jul 10 23:36:02.753837 kubelet[2876]: E0710 23:36:02.753756 2876 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.228:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:36:02.753917 kubelet[2876]: E0710 23:36:02.753845 2876 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 23:36:02.756077 kubelet[2876]: I0710 23:36:02.755919 2876 factory.go:221] Registration of the containerd container factory successfully Jul 10 23:36:02.797359 kubelet[2876]: I0710 23:36:02.796946 2876 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 23:36:02.797359 kubelet[2876]: I0710 23:36:02.796993 2876 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 23:36:02.797359 kubelet[2876]: I0710 23:36:02.797024 2876 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:36:02.797810 kubelet[2876]: I0710 23:36:02.797687 2876 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 23:36:02.800783 kubelet[2876]: I0710 23:36:02.800321 2876 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 23:36:02.800783 kubelet[2876]: I0710 23:36:02.800366 2876 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 23:36:02.800783 kubelet[2876]: I0710 23:36:02.800398 2876 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
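The NodeConfig dump in the container-manager startup a few entries earlier lists the default hard eviction thresholds: memory.available below 100Mi as an absolute quantity, plus percentage thresholds for nodefs.available (10%), nodefs.inodesFree (5%), imagefs.available (15%) and imagefs.inodesFree (5%). The sketch below shows how such a threshold resolves into an absolute limit against a node's capacity; the types are local stand-ins rather than the kubelet's eviction structs, and the 30 GiB filesystem size is an assumption for illustration.

// evictionthresholds.go: resolve the HardEvictionThresholds printed in the
// container-manager NodeConfig entry above against a node's capacity.
package main

import "fmt"

type threshold struct {
	Signal     string
	Quantity   int64   // absolute units (bytes or inodes), 0 if unset
	Percentage float64 // fraction of capacity, 0 if unset
}

// resolve returns the absolute amount that must remain available before the
// eviction manager would start evicting pods for this signal.
func resolve(t threshold, capacity int64) int64 {
	if t.Quantity > 0 {
		return t.Quantity
	}
	return int64(t.Percentage * float64(capacity))
}

func main() {
	// Values copied from the NodeConfig entry in the log.
	defaults := []threshold{
		{Signal: "memory.available", Quantity: 100 * 1024 * 1024},
		{Signal: "nodefs.available", Percentage: 0.10},
		{Signal: "nodefs.inodesFree", Percentage: 0.05},
		{Signal: "imagefs.available", Percentage: 0.15},
		{Signal: "imagefs.inodesFree", Percentage: 0.05},
	}
	const rootfs = int64(30) * 1024 * 1024 * 1024 // assumed 30 GiB root filesystem
	for _, t := range defaults {
		fmt.Printf("%-20s evict when available < %d units\n", t.Signal, resolve(t, rootfs))
	}
}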
Jul 10 23:36:02.800783 kubelet[2876]: I0710 23:36:02.800412 2876 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 23:36:02.800783 kubelet[2876]: E0710 23:36:02.800482 2876 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:36:02.804297 kubelet[2876]: W0710 23:36:02.803780 2876 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.228:6443: connect: connection refused Jul 10 23:36:02.804859 kubelet[2876]: E0710 23:36:02.804637 2876 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.228:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:36:02.805787 kubelet[2876]: I0710 23:36:02.805473 2876 policy_none.go:49] "None policy: Start" Jul 10 23:36:02.805787 kubelet[2876]: I0710 23:36:02.805520 2876 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 23:36:02.805787 kubelet[2876]: I0710 23:36:02.805560 2876 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:36:02.819002 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 23:36:02.840279 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 23:36:02.848141 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 23:36:02.851199 kubelet[2876]: E0710 23:36:02.851160 2876 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-228\" not found" Jul 10 23:36:02.859954 kubelet[2876]: E0710 23:36:02.859779 2876 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.228:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.228:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-228.1851080885e45f59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-228,UID:ip-172-31-24-228,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-228,},FirstTimestamp:2025-07-10 23:36:02.722930521 +0000 UTC m=+2.257379293,LastTimestamp:2025-07-10 23:36:02.722930521 +0000 UTC m=+2.257379293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-228,}" Jul 10 23:36:02.863239 kubelet[2876]: I0710 23:36:02.862445 2876 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 23:36:02.863239 kubelet[2876]: I0710 23:36:02.862754 2876 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:36:02.863239 kubelet[2876]: I0710 23:36:02.862774 2876 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:36:02.863239 kubelet[2876]: I0710 23:36:02.863094 2876 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:36:02.866241 kubelet[2876]: E0710 23:36:02.866038 2876 
eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 23:36:02.866241 kubelet[2876]: E0710 23:36:02.866133 2876 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-228\" not found" Jul 10 23:36:02.922360 systemd[1]: Created slice kubepods-burstable-podfd3037350825581412b3f3e908b386ac.slice - libcontainer container kubepods-burstable-podfd3037350825581412b3f3e908b386ac.slice. Jul 10 23:36:02.943198 kubelet[2876]: E0710 23:36:02.943050 2876 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-228\" not found" node="ip-172-31-24-228" Jul 10 23:36:02.952036 kubelet[2876]: I0710 23:36:02.951830 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd3037350825581412b3f3e908b386ac-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-228\" (UID: \"fd3037350825581412b3f3e908b386ac\") " pod="kube-system/kube-apiserver-ip-172-31-24-228" Jul 10 23:36:02.952036 kubelet[2876]: I0710 23:36:02.951890 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee701c37be1313f0e9c347f7b79cfc83-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-228\" (UID: \"ee701c37be1313f0e9c347f7b79cfc83\") " pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:02.952036 kubelet[2876]: I0710 23:36:02.951934 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd3037350825581412b3f3e908b386ac-ca-certs\") pod \"kube-apiserver-ip-172-31-24-228\" (UID: \"fd3037350825581412b3f3e908b386ac\") " pod="kube-system/kube-apiserver-ip-172-31-24-228" Jul 10 23:36:02.952036 kubelet[2876]: I0710 23:36:02.951972 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd3037350825581412b3f3e908b386ac-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-228\" (UID: \"fd3037350825581412b3f3e908b386ac\") " pod="kube-system/kube-apiserver-ip-172-31-24-228" Jul 10 23:36:02.952036 kubelet[2876]: I0710 23:36:02.952013 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee701c37be1313f0e9c347f7b79cfc83-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-228\" (UID: \"ee701c37be1313f0e9c347f7b79cfc83\") " pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:02.952493 kubelet[2876]: I0710 23:36:02.952050 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee701c37be1313f0e9c347f7b79cfc83-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-228\" (UID: \"ee701c37be1313f0e9c347f7b79cfc83\") " pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:02.952493 kubelet[2876]: I0710 23:36:02.952085 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee701c37be1313f0e9c347f7b79cfc83-k8s-certs\") pod 
\"kube-controller-manager-ip-172-31-24-228\" (UID: \"ee701c37be1313f0e9c347f7b79cfc83\") " pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:02.952493 kubelet[2876]: I0710 23:36:02.952120 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee701c37be1313f0e9c347f7b79cfc83-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-228\" (UID: \"ee701c37be1313f0e9c347f7b79cfc83\") " pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:02.952493 kubelet[2876]: I0710 23:36:02.952158 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/12120d0681db33b0f5e792e3305f2dd9-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-228\" (UID: \"12120d0681db33b0f5e792e3305f2dd9\") " pod="kube-system/kube-scheduler-ip-172-31-24-228" Jul 10 23:36:02.952386 systemd[1]: Created slice kubepods-burstable-podee701c37be1313f0e9c347f7b79cfc83.slice - libcontainer container kubepods-burstable-podee701c37be1313f0e9c347f7b79cfc83.slice. Jul 10 23:36:02.955394 kubelet[2876]: E0710 23:36:02.954700 2876 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-228?timeout=10s\": dial tcp 172.31.24.228:6443: connect: connection refused" interval="400ms" Jul 10 23:36:02.956695 kubelet[2876]: E0710 23:36:02.956642 2876 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-228\" not found" node="ip-172-31-24-228" Jul 10 23:36:02.961137 systemd[1]: Created slice kubepods-burstable-pod12120d0681db33b0f5e792e3305f2dd9.slice - libcontainer container kubepods-burstable-pod12120d0681db33b0f5e792e3305f2dd9.slice. 
Jul 10 23:36:02.965424 kubelet[2876]: I0710 23:36:02.965362 2876 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-228" Jul 10 23:36:02.966092 kubelet[2876]: E0710 23:36:02.966058 2876 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.228:6443/api/v1/nodes\": dial tcp 172.31.24.228:6443: connect: connection refused" node="ip-172-31-24-228" Jul 10 23:36:02.967683 kubelet[2876]: E0710 23:36:02.967602 2876 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-228\" not found" node="ip-172-31-24-228" Jul 10 23:36:03.169195 kubelet[2876]: I0710 23:36:03.169071 2876 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-228" Jul 10 23:36:03.170029 kubelet[2876]: E0710 23:36:03.169974 2876 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.228:6443/api/v1/nodes\": dial tcp 172.31.24.228:6443: connect: connection refused" node="ip-172-31-24-228" Jul 10 23:36:03.246091 containerd[1952]: time="2025-07-10T23:36:03.245200931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-228,Uid:fd3037350825581412b3f3e908b386ac,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:03.258628 containerd[1952]: time="2025-07-10T23:36:03.258533387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-228,Uid:ee701c37be1313f0e9c347f7b79cfc83,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:03.269598 containerd[1952]: time="2025-07-10T23:36:03.269406539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-228,Uid:12120d0681db33b0f5e792e3305f2dd9,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:03.355718 kubelet[2876]: E0710 23:36:03.355620 2876 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-228?timeout=10s\": dial tcp 172.31.24.228:6443: connect: connection refused" interval="800ms" Jul 10 23:36:03.573374 kubelet[2876]: I0710 23:36:03.573201 2876 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-228" Jul 10 23:36:03.574235 kubelet[2876]: E0710 23:36:03.574149 2876 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.228:6443/api/v1/nodes\": dial tcp 172.31.24.228:6443: connect: connection refused" node="ip-172-31-24-228" Jul 10 23:36:03.739955 kubelet[2876]: W0710 23:36:03.739832 2876 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.228:6443: connect: connection refused Jul 10 23:36:03.740535 kubelet[2876]: E0710 23:36:03.739982 2876 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.228:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:36:03.763057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389525409.mount: Deactivated successfully. 
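The repeated "Failed to ensure lease exists, will retry" errors above step through interval="200ms", "400ms" and "800ms" (and reach "1.6s" shortly after) while the apiserver at 172.31.24.228:6443 is still refusing connections: the kubelet doubles its retry interval on each failed attempt to create the node lease. A hedged sketch of that doubling pattern follows; the 7s ceiling is an assumption, not something taken from these logs.

// leasebackoff.go: reproduce the doubling retry interval visible in the
// "Failed to ensure lease exists, will retry" entries (200ms -> 400ms -> 800ms -> 1.6s).
// Hedged sketch of the pattern, not the kubelet's nodelease controller itself.
package main

import (
	"errors"
	"fmt"
	"time"
)

func ensureLease() error {
	// Stand-in for the POST to /apis/coordination.k8s.io/v1/.../leases that the
	// log shows failing with "connection refused" while the apiserver is down.
	return errors.New("connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed upper bound

	for attempt := 1; attempt <= 5; attempt++ {
		if err := ensureLease(); err != nil {
			fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, interval)
			time.Sleep(interval)
			interval *= 2
			if interval > maxInterval {
				interval = maxInterval
			}
			continue
		}
		fmt.Println("lease ensured")
		return
	}
}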
Jul 10 23:36:03.776999 containerd[1952]: time="2025-07-10T23:36:03.776917970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:36:03.781355 containerd[1952]: time="2025-07-10T23:36:03.781268078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 10 23:36:03.788272 containerd[1952]: time="2025-07-10T23:36:03.788006882Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:36:03.794964 containerd[1952]: time="2025-07-10T23:36:03.794889146Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:36:03.797829 containerd[1952]: time="2025-07-10T23:36:03.796956746Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:36:03.800039 containerd[1952]: time="2025-07-10T23:36:03.799816022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:36:03.801475 containerd[1952]: time="2025-07-10T23:36:03.801264878Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 23:36:03.801989 containerd[1952]: time="2025-07-10T23:36:03.801909350Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.571823ms" Jul 10 23:36:03.804023 containerd[1952]: time="2025-07-10T23:36:03.803779034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 23:36:03.815774 containerd[1952]: time="2025-07-10T23:36:03.815559410Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.904271ms" Jul 10 23:36:03.817215 containerd[1952]: time="2025-07-10T23:36:03.817174934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.660263ms" Jul 10 23:36:03.970644 kubelet[2876]: W0710 23:36:03.970555 2876 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.228:6443: connect: connection refused Jul 10 
23:36:03.970947 kubelet[2876]: E0710 23:36:03.970914 2876 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.228:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:36:04.025899 containerd[1952]: time="2025-07-10T23:36:04.021597731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:04.025899 containerd[1952]: time="2025-07-10T23:36:04.025845671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:04.025899 containerd[1952]: time="2025-07-10T23:36:04.025895159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:04.026319 containerd[1952]: time="2025-07-10T23:36:04.026087399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:04.030654 containerd[1952]: time="2025-07-10T23:36:04.030502271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:04.032405 containerd[1952]: time="2025-07-10T23:36:04.032212247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:04.032898 containerd[1952]: time="2025-07-10T23:36:04.032359715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:04.038761 containerd[1952]: time="2025-07-10T23:36:04.035900327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:04.040610 containerd[1952]: time="2025-07-10T23:36:04.040467251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:04.040827 containerd[1952]: time="2025-07-10T23:36:04.040584839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:04.040827 containerd[1952]: time="2025-07-10T23:36:04.040629863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:04.041014 containerd[1952]: time="2025-07-10T23:36:04.040814267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:04.064449 kubelet[2876]: W0710 23:36:04.064401 2876 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.228:6443: connect: connection refused Jul 10 23:36:04.064655 kubelet[2876]: E0710 23:36:04.064623 2876 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.228:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:36:04.080060 systemd[1]: Started cri-containerd-071eb3de118e7b292b174de6d9c699edbcc3a895843bb4fa555a55bf64a7811d.scope - libcontainer container 071eb3de118e7b292b174de6d9c699edbcc3a895843bb4fa555a55bf64a7811d. Jul 10 23:36:04.101100 systemd[1]: Started cri-containerd-3f9d33b13c9f54c23140de368f3d4847f8943d8f62e872d12e07c11158ea2dbe.scope - libcontainer container 3f9d33b13c9f54c23140de368f3d4847f8943d8f62e872d12e07c11158ea2dbe. Jul 10 23:36:04.114688 systemd[1]: Started cri-containerd-5aec129f574898add2d413798c2702b379a727e6f94944fb58d4e7ef20e526fb.scope - libcontainer container 5aec129f574898add2d413798c2702b379a727e6f94944fb58d4e7ef20e526fb. Jul 10 23:36:04.157407 kubelet[2876]: E0710 23:36:04.157330 2876 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-228?timeout=10s\": dial tcp 172.31.24.228:6443: connect: connection refused" interval="1.6s" Jul 10 23:36:04.187347 containerd[1952]: time="2025-07-10T23:36:04.187280508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-228,Uid:12120d0681db33b0f5e792e3305f2dd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"071eb3de118e7b292b174de6d9c699edbcc3a895843bb4fa555a55bf64a7811d\"" Jul 10 23:36:04.200143 containerd[1952]: time="2025-07-10T23:36:04.199090572Z" level=info msg="CreateContainer within sandbox \"071eb3de118e7b292b174de6d9c699edbcc3a895843bb4fa555a55bf64a7811d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 23:36:04.225168 containerd[1952]: time="2025-07-10T23:36:04.225000552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-228,Uid:fd3037350825581412b3f3e908b386ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f9d33b13c9f54c23140de368f3d4847f8943d8f62e872d12e07c11158ea2dbe\"" Jul 10 23:36:04.236993 containerd[1952]: time="2025-07-10T23:36:04.236927652Z" level=info msg="CreateContainer within sandbox \"3f9d33b13c9f54c23140de368f3d4847f8943d8f62e872d12e07c11158ea2dbe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 23:36:04.254494 containerd[1952]: time="2025-07-10T23:36:04.254361072Z" level=info msg="CreateContainer within sandbox \"071eb3de118e7b292b174de6d9c699edbcc3a895843bb4fa555a55bf64a7811d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"38ffbc3999381f4e1b090b270c7b44c025e4912ec596579c05853b9e6870165b\"" Jul 10 23:36:04.256370 containerd[1952]: time="2025-07-10T23:36:04.256325220Z" level=info msg="StartContainer for \"38ffbc3999381f4e1b090b270c7b44c025e4912ec596579c05853b9e6870165b\"" Jul 10 23:36:04.257431 containerd[1952]: time="2025-07-10T23:36:04.257345292Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-228,Uid:ee701c37be1313f0e9c347f7b79cfc83,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aec129f574898add2d413798c2702b379a727e6f94944fb58d4e7ef20e526fb\"" Jul 10 23:36:04.264225 containerd[1952]: time="2025-07-10T23:36:04.264152916Z" level=info msg="CreateContainer within sandbox \"5aec129f574898add2d413798c2702b379a727e6f94944fb58d4e7ef20e526fb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 23:36:04.267059 kubelet[2876]: W0710 23:36:04.266678 2876 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-228&limit=500&resourceVersion=0": dial tcp 172.31.24.228:6443: connect: connection refused Jul 10 23:36:04.267326 kubelet[2876]: E0710 23:36:04.267210 2876 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-228&limit=500&resourceVersion=0\": dial tcp 172.31.24.228:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:36:04.296439 containerd[1952]: time="2025-07-10T23:36:04.296379541Z" level=info msg="CreateContainer within sandbox \"3f9d33b13c9f54c23140de368f3d4847f8943d8f62e872d12e07c11158ea2dbe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cdd72c39ee2fb25420fb2f27acb7f3e4066bb92406403e4ce527c14dcc1b1c4e\"" Jul 10 23:36:04.302784 containerd[1952]: time="2025-07-10T23:36:04.301272517Z" level=info msg="StartContainer for \"cdd72c39ee2fb25420fb2f27acb7f3e4066bb92406403e4ce527c14dcc1b1c4e\"" Jul 10 23:36:04.316035 systemd[1]: Started cri-containerd-38ffbc3999381f4e1b090b270c7b44c025e4912ec596579c05853b9e6870165b.scope - libcontainer container 38ffbc3999381f4e1b090b270c7b44c025e4912ec596579c05853b9e6870165b. Jul 10 23:36:04.324411 containerd[1952]: time="2025-07-10T23:36:04.324226093Z" level=info msg="CreateContainer within sandbox \"5aec129f574898add2d413798c2702b379a727e6f94944fb58d4e7ef20e526fb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b939973a87f06ddca5e60a595d2fa129a58c52b1cee9e86d5f9555315d3536c4\"" Jul 10 23:36:04.331246 containerd[1952]: time="2025-07-10T23:36:04.331180105Z" level=info msg="StartContainer for \"b939973a87f06ddca5e60a595d2fa129a58c52b1cee9e86d5f9555315d3536c4\"" Jul 10 23:36:04.384459 kubelet[2876]: I0710 23:36:04.384399 2876 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-228" Jul 10 23:36:04.385635 kubelet[2876]: E0710 23:36:04.385523 2876 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.228:6443/api/v1/nodes\": dial tcp 172.31.24.228:6443: connect: connection refused" node="ip-172-31-24-228" Jul 10 23:36:04.397314 systemd[1]: Started cri-containerd-cdd72c39ee2fb25420fb2f27acb7f3e4066bb92406403e4ce527c14dcc1b1c4e.scope - libcontainer container cdd72c39ee2fb25420fb2f27acb7f3e4066bb92406403e4ce527c14dcc1b1c4e. Jul 10 23:36:04.428097 systemd[1]: Started cri-containerd-b939973a87f06ddca5e60a595d2fa129a58c52b1cee9e86d5f9555315d3536c4.scope - libcontainer container b939973a87f06ddca5e60a595d2fa129a58c52b1cee9e86d5f9555315d3536c4. 
Jul 10 23:36:04.443086 containerd[1952]: time="2025-07-10T23:36:04.443014933Z" level=info msg="StartContainer for \"38ffbc3999381f4e1b090b270c7b44c025e4912ec596579c05853b9e6870165b\" returns successfully" Jul 10 23:36:04.526945 containerd[1952]: time="2025-07-10T23:36:04.526795022Z" level=info msg="StartContainer for \"cdd72c39ee2fb25420fb2f27acb7f3e4066bb92406403e4ce527c14dcc1b1c4e\" returns successfully" Jul 10 23:36:04.576452 containerd[1952]: time="2025-07-10T23:36:04.576383858Z" level=info msg="StartContainer for \"b939973a87f06ddca5e60a595d2fa129a58c52b1cee9e86d5f9555315d3536c4\" returns successfully" Jul 10 23:36:04.820784 kubelet[2876]: E0710 23:36:04.820220 2876 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-228\" not found" node="ip-172-31-24-228" Jul 10 23:36:04.825596 kubelet[2876]: E0710 23:36:04.824829 2876 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-228\" not found" node="ip-172-31-24-228" Jul 10 23:36:04.829775 kubelet[2876]: E0710 23:36:04.829697 2876 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-228\" not found" node="ip-172-31-24-228" Jul 10 23:36:05.005432 update_engine[1939]: I20250710 23:36:05.004489 1939 update_attempter.cc:509] Updating boot flags... Jul 10 23:36:05.139812 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3166) Jul 10 23:36:05.833774 kubelet[2876]: E0710 23:36:05.830698 2876 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-228\" not found" node="ip-172-31-24-228" Jul 10 23:36:05.833774 kubelet[2876]: E0710 23:36:05.831245 2876 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-228\" not found" node="ip-172-31-24-228" Jul 10 23:36:05.988138 kubelet[2876]: I0710 23:36:05.988099 2876 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-228" Jul 10 23:36:08.121148 kubelet[2876]: E0710 23:36:08.121072 2876 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-228\" not found" node="ip-172-31-24-228" Jul 10 23:36:08.278839 kubelet[2876]: E0710 23:36:08.276510 2876 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-228\" not found" node="ip-172-31-24-228" Jul 10 23:36:08.299502 kubelet[2876]: E0710 23:36:08.299431 2876 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-228\" not found" node="ip-172-31-24-228" Jul 10 23:36:08.353028 kubelet[2876]: I0710 23:36:08.352790 2876 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-228" Jul 10 23:36:08.451627 kubelet[2876]: I0710 23:36:08.451166 2876 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-228" Jul 10 23:36:08.512765 kubelet[2876]: E0710 23:36:08.512022 2876 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-228\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-228" Jul 10 23:36:08.513032 kubelet[2876]: I0710 23:36:08.512990 2876 kubelet.go:3194] "Creating a mirror pod for static 
pod" pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:08.519633 kubelet[2876]: E0710 23:36:08.519282 2876 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-228\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:08.519633 kubelet[2876]: I0710 23:36:08.519340 2876 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-228" Jul 10 23:36:08.533118 kubelet[2876]: E0710 23:36:08.533051 2876 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-228\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-228" Jul 10 23:36:08.718698 kubelet[2876]: I0710 23:36:08.718261 2876 apiserver.go:52] "Watching apiserver" Jul 10 23:36:08.749978 kubelet[2876]: I0710 23:36:08.749915 2876 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 23:36:10.440216 systemd[1]: Reload requested from client PID 3253 ('systemctl') (unit session-9.scope)... Jul 10 23:36:10.440242 systemd[1]: Reloading... Jul 10 23:36:10.745784 zram_generator::config[3301]: No configuration found. Jul 10 23:36:11.037601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:36:11.299946 systemd[1]: Reloading finished in 858 ms. Jul 10 23:36:11.343720 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:36:11.359879 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 23:36:11.360429 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:11.360513 systemd[1]: kubelet.service: Consumed 2.985s CPU time, 129.8M memory peak. Jul 10 23:36:11.369302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:36:11.720057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:11.731378 (kubelet)[3358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:36:11.826785 kubelet[3358]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:36:11.826785 kubelet[3358]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 23:36:11.826785 kubelet[3358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 23:36:11.826785 kubelet[3358]: I0710 23:36:11.826410 3358 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 23:36:11.840655 kubelet[3358]: I0710 23:36:11.840611 3358 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 23:36:11.842540 kubelet[3358]: I0710 23:36:11.840868 3358 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 23:36:11.842540 kubelet[3358]: I0710 23:36:11.841405 3358 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 23:36:11.845250 kubelet[3358]: I0710 23:36:11.845213 3358 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 23:36:11.850568 kubelet[3358]: I0710 23:36:11.850526 3358 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 23:36:11.866420 kubelet[3358]: E0710 23:36:11.866367 3358 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 23:36:11.866720 kubelet[3358]: I0710 23:36:11.866672 3358 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 23:36:11.873454 kubelet[3358]: I0710 23:36:11.873398 3358 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 23:36:11.873930 kubelet[3358]: I0710 23:36:11.873856 3358 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 23:36:11.874460 kubelet[3358]: I0710 23:36:11.873917 3358 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-228","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 23:36:11.874460 kubelet[3358]: I0710 23:36:11.874240 3358 topology_manager.go:138] 
"Creating topology manager with none policy" Jul 10 23:36:11.874460 kubelet[3358]: I0710 23:36:11.874265 3358 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 23:36:11.874460 kubelet[3358]: I0710 23:36:11.874342 3358 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:36:11.874848 kubelet[3358]: I0710 23:36:11.874615 3358 kubelet.go:446] "Attempting to sync node with API server" Jul 10 23:36:11.876782 kubelet[3358]: I0710 23:36:11.874646 3358 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 23:36:11.876782 kubelet[3358]: I0710 23:36:11.875820 3358 kubelet.go:352] "Adding apiserver pod source" Jul 10 23:36:11.877835 kubelet[3358]: I0710 23:36:11.877784 3358 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 23:36:11.885624 sudo[3372]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 23:36:11.886679 sudo[3372]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 23:36:11.888935 kubelet[3358]: I0710 23:36:11.888202 3358 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 10 23:36:11.892626 kubelet[3358]: I0710 23:36:11.892565 3358 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 23:36:11.895233 kubelet[3358]: I0710 23:36:11.894434 3358 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 23:36:11.895233 kubelet[3358]: I0710 23:36:11.894503 3358 server.go:1287] "Started kubelet" Jul 10 23:36:11.908553 kubelet[3358]: I0710 23:36:11.908494 3358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 23:36:11.920968 kubelet[3358]: I0710 23:36:11.917957 3358 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 23:36:11.920968 kubelet[3358]: I0710 23:36:11.919671 3358 server.go:479] "Adding debug handlers to kubelet server" Jul 10 23:36:11.929057 kubelet[3358]: I0710 23:36:11.928945 3358 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 23:36:11.929566 kubelet[3358]: I0710 23:36:11.929517 3358 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 23:36:11.932176 kubelet[3358]: I0710 23:36:11.932121 3358 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 23:36:11.943948 kubelet[3358]: I0710 23:36:11.943894 3358 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 23:36:11.944834 kubelet[3358]: E0710 23:36:11.944292 3358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-228\" not found" Jul 10 23:36:11.951817 kubelet[3358]: I0710 23:36:11.948076 3358 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 23:36:11.951817 kubelet[3358]: I0710 23:36:11.948329 3358 reconciler.go:26] "Reconciler: start to sync state" Jul 10 23:36:11.974928 kubelet[3358]: I0710 23:36:11.974434 3358 factory.go:221] Registration of the systemd container factory successfully Jul 10 23:36:11.983076 kubelet[3358]: I0710 23:36:11.982130 3358 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 23:36:11.995938 
kubelet[3358]: E0710 23:36:11.995882 3358 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 23:36:12.014671 kubelet[3358]: I0710 23:36:12.014473 3358 factory.go:221] Registration of the containerd container factory successfully Jul 10 23:36:12.021295 kubelet[3358]: I0710 23:36:12.020905 3358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 23:36:12.027489 kubelet[3358]: I0710 23:36:12.027426 3358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 23:36:12.027489 kubelet[3358]: I0710 23:36:12.027483 3358 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 23:36:12.027687 kubelet[3358]: I0710 23:36:12.027531 3358 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 23:36:12.027687 kubelet[3358]: I0710 23:36:12.027552 3358 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 23:36:12.027687 kubelet[3358]: E0710 23:36:12.027628 3358 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:36:12.129290 kubelet[3358]: E0710 23:36:12.127863 3358 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 23:36:12.150799 kubelet[3358]: I0710 23:36:12.150752 3358 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 23:36:12.150799 kubelet[3358]: I0710 23:36:12.150786 3358 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 23:36:12.150995 kubelet[3358]: I0710 23:36:12.150821 3358 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:36:12.151203 kubelet[3358]: I0710 23:36:12.151161 3358 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 23:36:12.151263 kubelet[3358]: I0710 23:36:12.151196 3358 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 23:36:12.151263 kubelet[3358]: I0710 23:36:12.151231 3358 policy_none.go:49] "None policy: Start" Jul 10 23:36:12.151263 kubelet[3358]: I0710 23:36:12.151249 3358 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 23:36:12.151440 kubelet[3358]: I0710 23:36:12.151269 3358 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:36:12.151517 kubelet[3358]: I0710 23:36:12.151482 3358 state_mem.go:75] "Updated machine memory state" Jul 10 23:36:12.173505 kubelet[3358]: I0710 23:36:12.173449 3358 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 23:36:12.176513 kubelet[3358]: I0710 23:36:12.175912 3358 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:36:12.177404 kubelet[3358]: I0710 23:36:12.176249 3358 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:36:12.178424 kubelet[3358]: I0710 23:36:12.178011 3358 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:36:12.189676 kubelet[3358]: E0710 23:36:12.189613 3358 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 23:36:12.315146 kubelet[3358]: I0710 23:36:12.314104 3358 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-228" Jul 10 23:36:12.330768 kubelet[3358]: I0710 23:36:12.329372 3358 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-228" Jul 10 23:36:12.331154 kubelet[3358]: I0710 23:36:12.331111 3358 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:12.331441 kubelet[3358]: I0710 23:36:12.331404 3358 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-228" Jul 10 23:36:12.350814 kubelet[3358]: I0710 23:36:12.350678 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee701c37be1313f0e9c347f7b79cfc83-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-228\" (UID: \"ee701c37be1313f0e9c347f7b79cfc83\") " pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:12.350814 kubelet[3358]: I0710 23:36:12.350812 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee701c37be1313f0e9c347f7b79cfc83-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-228\" (UID: \"ee701c37be1313f0e9c347f7b79cfc83\") " pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:12.351059 kubelet[3358]: I0710 23:36:12.350876 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd3037350825581412b3f3e908b386ac-ca-certs\") pod \"kube-apiserver-ip-172-31-24-228\" (UID: \"fd3037350825581412b3f3e908b386ac\") " pod="kube-system/kube-apiserver-ip-172-31-24-228" Jul 10 23:36:12.351059 kubelet[3358]: I0710 23:36:12.350919 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd3037350825581412b3f3e908b386ac-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-228\" (UID: \"fd3037350825581412b3f3e908b386ac\") " pod="kube-system/kube-apiserver-ip-172-31-24-228" Jul 10 23:36:12.351059 kubelet[3358]: I0710 23:36:12.350958 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee701c37be1313f0e9c347f7b79cfc83-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-228\" (UID: \"ee701c37be1313f0e9c347f7b79cfc83\") " pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:12.351059 kubelet[3358]: I0710 23:36:12.350996 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee701c37be1313f0e9c347f7b79cfc83-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-228\" (UID: \"ee701c37be1313f0e9c347f7b79cfc83\") " pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:12.351059 kubelet[3358]: I0710 23:36:12.351039 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/12120d0681db33b0f5e792e3305f2dd9-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-228\" (UID: \"12120d0681db33b0f5e792e3305f2dd9\") " 
pod="kube-system/kube-scheduler-ip-172-31-24-228" Jul 10 23:36:12.351298 kubelet[3358]: I0710 23:36:12.351074 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd3037350825581412b3f3e908b386ac-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-228\" (UID: \"fd3037350825581412b3f3e908b386ac\") " pod="kube-system/kube-apiserver-ip-172-31-24-228" Jul 10 23:36:12.351298 kubelet[3358]: I0710 23:36:12.351110 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee701c37be1313f0e9c347f7b79cfc83-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-228\" (UID: \"ee701c37be1313f0e9c347f7b79cfc83\") " pod="kube-system/kube-controller-manager-ip-172-31-24-228" Jul 10 23:36:12.359940 kubelet[3358]: I0710 23:36:12.359885 3358 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-228" Jul 10 23:36:12.360145 kubelet[3358]: I0710 23:36:12.360012 3358 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-228" Jul 10 23:36:12.879869 kubelet[3358]: I0710 23:36:12.878808 3358 apiserver.go:52] "Watching apiserver" Jul 10 23:36:12.909917 sudo[3372]: pam_unix(sudo:session): session closed for user root Jul 10 23:36:12.948694 kubelet[3358]: I0710 23:36:12.948607 3358 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 23:36:13.056076 kubelet[3358]: I0710 23:36:13.054566 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-228" podStartSLOduration=1.05451692 podStartE2EDuration="1.05451692s" podCreationTimestamp="2025-07-10 23:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:13.054240176 +0000 UTC m=+1.312896104" watchObservedRunningTime="2025-07-10 23:36:13.05451692 +0000 UTC m=+1.313172824" Jul 10 23:36:13.083852 kubelet[3358]: I0710 23:36:13.082667 3358 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-228" Jul 10 23:36:13.086109 kubelet[3358]: I0710 23:36:13.086047 3358 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-228" Jul 10 23:36:13.101995 kubelet[3358]: E0710 23:36:13.100699 3358 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-228\" already exists" pod="kube-system/kube-scheduler-ip-172-31-24-228" Jul 10 23:36:13.102467 kubelet[3358]: E0710 23:36:13.102406 3358 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-228\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-228" Jul 10 23:36:13.109037 kubelet[3358]: I0710 23:36:13.108629 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-228" podStartSLOduration=1.10860662 podStartE2EDuration="1.10860662s" podCreationTimestamp="2025-07-10 23:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:13.07935962 +0000 UTC m=+1.338015560" watchObservedRunningTime="2025-07-10 23:36:13.10860662 +0000 UTC m=+1.367262524" Jul 10 23:36:13.133100 kubelet[3358]: I0710 23:36:13.132362 3358 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-228" podStartSLOduration=1.1323387679999999 podStartE2EDuration="1.132338768s" podCreationTimestamp="2025-07-10 23:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:13.108816704 +0000 UTC m=+1.367472632" watchObservedRunningTime="2025-07-10 23:36:13.132338768 +0000 UTC m=+1.390994672" Jul 10 23:36:15.212547 sudo[2317]: pam_unix(sudo:session): session closed for user root Jul 10 23:36:15.235771 sshd[2316]: Connection closed by 147.75.109.163 port 56212 Jul 10 23:36:15.236023 sshd-session[2314]: pam_unix(sshd:session): session closed for user core Jul 10 23:36:15.241975 systemd-logind[1938]: Session 9 logged out. Waiting for processes to exit. Jul 10 23:36:15.243153 systemd[1]: sshd@8-172.31.24.228:22-147.75.109.163:56212.service: Deactivated successfully. Jul 10 23:36:15.249982 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 23:36:15.250329 systemd[1]: session-9.scope: Consumed 12.356s CPU time, 265.4M memory peak. Jul 10 23:36:15.256661 systemd-logind[1938]: Removed session 9. Jul 10 23:36:16.809604 kubelet[3358]: I0710 23:36:16.809552 3358 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 23:36:16.810720 containerd[1952]: time="2025-07-10T23:36:16.810551139Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 23:36:16.811321 kubelet[3358]: I0710 23:36:16.810976 3358 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 23:36:17.493463 systemd[1]: Created slice kubepods-besteffort-pod75132eee_c34d_4940_9900_28d596d212d8.slice - libcontainer container kubepods-besteffort-pod75132eee_c34d_4940_9900_28d596d212d8.slice. Jul 10 23:36:17.533650 systemd[1]: Created slice kubepods-burstable-podb2ad7c4f_a5b9_43ef_bc9a_85030dc02a32.slice - libcontainer container kubepods-burstable-podb2ad7c4f_a5b9_43ef_bc9a_85030dc02a32.slice. 
Jul 10 23:36:17.585554 kubelet[3358]: I0710 23:36:17.584523 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-hubble-tls\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.585554 kubelet[3358]: I0710 23:36:17.584590 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cni-path\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.585554 kubelet[3358]: I0710 23:36:17.584632 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75132eee-c34d-4940-9900-28d596d212d8-xtables-lock\") pod \"kube-proxy-rgpvx\" (UID: \"75132eee-c34d-4940-9900-28d596d212d8\") " pod="kube-system/kube-proxy-rgpvx" Jul 10 23:36:17.585554 kubelet[3358]: I0710 23:36:17.584668 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-clustermesh-secrets\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.585554 kubelet[3358]: I0710 23:36:17.584711 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-bpf-maps\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.585554 kubelet[3358]: I0710 23:36:17.584791 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-lib-modules\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.586089 kubelet[3358]: I0710 23:36:17.584852 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-host-proc-sys-kernel\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.586089 kubelet[3358]: I0710 23:36:17.584889 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cilium-run\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.586089 kubelet[3358]: I0710 23:36:17.584924 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-hostproc\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.586089 kubelet[3358]: I0710 23:36:17.584957 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-xtables-lock\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.586089 kubelet[3358]: I0710 23:36:17.584991 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-host-proc-sys-net\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.586089 kubelet[3358]: I0710 23:36:17.585037 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-etc-cni-netd\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.586378 kubelet[3358]: I0710 23:36:17.585073 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75132eee-c34d-4940-9900-28d596d212d8-lib-modules\") pod \"kube-proxy-rgpvx\" (UID: \"75132eee-c34d-4940-9900-28d596d212d8\") " pod="kube-system/kube-proxy-rgpvx" Jul 10 23:36:17.586378 kubelet[3358]: I0710 23:36:17.585107 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cilium-cgroup\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.586378 kubelet[3358]: I0710 23:36:17.585146 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cilium-config-path\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.586378 kubelet[3358]: I0710 23:36:17.585193 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xmpx\" (UniqueName: \"kubernetes.io/projected/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-kube-api-access-6xmpx\") pod \"cilium-8q847\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " pod="kube-system/cilium-8q847" Jul 10 23:36:17.586378 kubelet[3358]: I0710 23:36:17.585236 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/75132eee-c34d-4940-9900-28d596d212d8-kube-proxy\") pod \"kube-proxy-rgpvx\" (UID: \"75132eee-c34d-4940-9900-28d596d212d8\") " pod="kube-system/kube-proxy-rgpvx" Jul 10 23:36:17.586629 kubelet[3358]: I0710 23:36:17.585273 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzh7q\" (UniqueName: \"kubernetes.io/projected/75132eee-c34d-4940-9900-28d596d212d8-kube-api-access-wzh7q\") pod \"kube-proxy-rgpvx\" (UID: \"75132eee-c34d-4940-9900-28d596d212d8\") " pod="kube-system/kube-proxy-rgpvx" Jul 10 23:36:17.810789 containerd[1952]: time="2025-07-10T23:36:17.810560608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rgpvx,Uid:75132eee-c34d-4940-9900-28d596d212d8,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:17.857798 containerd[1952]: time="2025-07-10T23:36:17.857076100Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8q847,Uid:b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:17.911000 containerd[1952]: time="2025-07-10T23:36:17.909943660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:17.911000 containerd[1952]: time="2025-07-10T23:36:17.910047580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:17.911000 containerd[1952]: time="2025-07-10T23:36:17.910077292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:17.911000 containerd[1952]: time="2025-07-10T23:36:17.910230364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:17.966327 systemd[1]: Started cri-containerd-e51164d78b265f4bb9bf08af07d4a64b292440e6e959bc04d7b73fb46d6373f6.scope - libcontainer container e51164d78b265f4bb9bf08af07d4a64b292440e6e959bc04d7b73fb46d6373f6. Jul 10 23:36:17.992430 kubelet[3358]: I0710 23:36:17.990412 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bzbt\" (UniqueName: \"kubernetes.io/projected/bf1035e0-e9c8-4fed-af95-45f2d49e722d-kube-api-access-9bzbt\") pod \"cilium-operator-6c4d7847fc-rlrkc\" (UID: \"bf1035e0-e9c8-4fed-af95-45f2d49e722d\") " pod="kube-system/cilium-operator-6c4d7847fc-rlrkc" Jul 10 23:36:17.992430 kubelet[3358]: I0710 23:36:17.990492 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf1035e0-e9c8-4fed-af95-45f2d49e722d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rlrkc\" (UID: \"bf1035e0-e9c8-4fed-af95-45f2d49e722d\") " pod="kube-system/cilium-operator-6c4d7847fc-rlrkc" Jul 10 23:36:17.992433 systemd[1]: Created slice kubepods-besteffort-podbf1035e0_e9c8_4fed_af95_45f2d49e722d.slice - libcontainer container kubepods-besteffort-podbf1035e0_e9c8_4fed_af95_45f2d49e722d.slice. Jul 10 23:36:18.004837 containerd[1952]: time="2025-07-10T23:36:18.004131289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:18.004837 containerd[1952]: time="2025-07-10T23:36:18.004226269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:18.004837 containerd[1952]: time="2025-07-10T23:36:18.004261525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:18.004837 containerd[1952]: time="2025-07-10T23:36:18.004400125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:18.073012 systemd[1]: Started cri-containerd-16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c.scope - libcontainer container 16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c. 
Jul 10 23:36:18.149974 containerd[1952]: time="2025-07-10T23:36:18.149917285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rgpvx,Uid:75132eee-c34d-4940-9900-28d596d212d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e51164d78b265f4bb9bf08af07d4a64b292440e6e959bc04d7b73fb46d6373f6\"" Jul 10 23:36:18.172717 containerd[1952]: time="2025-07-10T23:36:18.172526905Z" level=info msg="CreateContainer within sandbox \"e51164d78b265f4bb9bf08af07d4a64b292440e6e959bc04d7b73fb46d6373f6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 23:36:18.190551 containerd[1952]: time="2025-07-10T23:36:18.190121522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8q847,Uid:b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32,Namespace:kube-system,Attempt:0,} returns sandbox id \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\"" Jul 10 23:36:18.195521 containerd[1952]: time="2025-07-10T23:36:18.195464618Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 23:36:18.212619 containerd[1952]: time="2025-07-10T23:36:18.212553734Z" level=info msg="CreateContainer within sandbox \"e51164d78b265f4bb9bf08af07d4a64b292440e6e959bc04d7b73fb46d6373f6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb3d6823949ac1504b8b548f46f3070e0799d43e53254fbb99bde7426a25e0cb\"" Jul 10 23:36:18.214657 containerd[1952]: time="2025-07-10T23:36:18.214597790Z" level=info msg="StartContainer for \"eb3d6823949ac1504b8b548f46f3070e0799d43e53254fbb99bde7426a25e0cb\"" Jul 10 23:36:18.261244 systemd[1]: Started cri-containerd-eb3d6823949ac1504b8b548f46f3070e0799d43e53254fbb99bde7426a25e0cb.scope - libcontainer container eb3d6823949ac1504b8b548f46f3070e0799d43e53254fbb99bde7426a25e0cb. Jul 10 23:36:18.306127 containerd[1952]: time="2025-07-10T23:36:18.305881082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rlrkc,Uid:bf1035e0-e9c8-4fed-af95-45f2d49e722d,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:18.332345 containerd[1952]: time="2025-07-10T23:36:18.332115506Z" level=info msg="StartContainer for \"eb3d6823949ac1504b8b548f46f3070e0799d43e53254fbb99bde7426a25e0cb\" returns successfully" Jul 10 23:36:18.368370 containerd[1952]: time="2025-07-10T23:36:18.365803838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:18.368370 containerd[1952]: time="2025-07-10T23:36:18.365918822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:18.368370 containerd[1952]: time="2025-07-10T23:36:18.365950622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:18.368370 containerd[1952]: time="2025-07-10T23:36:18.366113210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:18.408073 systemd[1]: Started cri-containerd-84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c.scope - libcontainer container 84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c. 
Jul 10 23:36:18.512081 containerd[1952]: time="2025-07-10T23:36:18.511961271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rlrkc,Uid:bf1035e0-e9c8-4fed-af95-45f2d49e722d,Namespace:kube-system,Attempt:0,} returns sandbox id \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\"" Jul 10 23:36:22.057485 kubelet[3358]: I0710 23:36:22.057385 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rgpvx" podStartSLOduration=5.057238601 podStartE2EDuration="5.057238601s" podCreationTimestamp="2025-07-10 23:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:19.15819701 +0000 UTC m=+7.416852938" watchObservedRunningTime="2025-07-10 23:36:22.057238601 +0000 UTC m=+10.315894505" Jul 10 23:36:22.896438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267466222.mount: Deactivated successfully. Jul 10 23:36:25.447128 containerd[1952]: time="2025-07-10T23:36:25.447043786Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:25.451771 containerd[1952]: time="2025-07-10T23:36:25.450846178Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 10 23:36:25.454880 containerd[1952]: time="2025-07-10T23:36:25.454798450Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:25.461042 containerd[1952]: time="2025-07-10T23:36:25.460968646Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.26523476s" Jul 10 23:36:25.461042 containerd[1952]: time="2025-07-10T23:36:25.461038306Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 10 23:36:25.464212 containerd[1952]: time="2025-07-10T23:36:25.464161126Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 23:36:25.466156 containerd[1952]: time="2025-07-10T23:36:25.465925114Z" level=info msg="CreateContainer within sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 23:36:25.495782 containerd[1952]: time="2025-07-10T23:36:25.495603034Z" level=info msg="CreateContainer within sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5\"" Jul 10 23:36:25.498799 containerd[1952]: time="2025-07-10T23:36:25.497189362Z" level=info msg="StartContainer for 
\"8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5\"" Jul 10 23:36:25.557056 systemd[1]: Started cri-containerd-8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5.scope - libcontainer container 8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5. Jul 10 23:36:25.606773 containerd[1952]: time="2025-07-10T23:36:25.606693994Z" level=info msg="StartContainer for \"8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5\" returns successfully" Jul 10 23:36:25.639692 systemd[1]: cri-containerd-8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5.scope: Deactivated successfully. Jul 10 23:36:26.490461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5-rootfs.mount: Deactivated successfully. Jul 10 23:36:26.687459 containerd[1952]: time="2025-07-10T23:36:26.687340608Z" level=info msg="shim disconnected" id=8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5 namespace=k8s.io Jul 10 23:36:26.688101 containerd[1952]: time="2025-07-10T23:36:26.687475200Z" level=warning msg="cleaning up after shim disconnected" id=8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5 namespace=k8s.io Jul 10 23:36:26.688101 containerd[1952]: time="2025-07-10T23:36:26.687497832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:36:27.149651 containerd[1952]: time="2025-07-10T23:36:27.149567254Z" level=info msg="CreateContainer within sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 23:36:27.186456 containerd[1952]: time="2025-07-10T23:36:27.186283558Z" level=info msg="CreateContainer within sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4\"" Jul 10 23:36:27.190504 containerd[1952]: time="2025-07-10T23:36:27.187778722Z" level=info msg="StartContainer for \"552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4\"" Jul 10 23:36:27.283094 systemd[1]: Started cri-containerd-552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4.scope - libcontainer container 552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4. Jul 10 23:36:27.355258 containerd[1952]: time="2025-07-10T23:36:27.355110431Z" level=info msg="StartContainer for \"552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4\" returns successfully" Jul 10 23:36:27.388548 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 23:36:27.390410 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:36:27.392159 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:36:27.402630 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:36:27.405578 systemd[1]: cri-containerd-552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4.scope: Deactivated successfully. Jul 10 23:36:27.453650 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 10 23:36:27.478097 containerd[1952]: time="2025-07-10T23:36:27.478019796Z" level=info msg="shim disconnected" id=552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4 namespace=k8s.io Jul 10 23:36:27.478442 containerd[1952]: time="2025-07-10T23:36:27.478407696Z" level=warning msg="cleaning up after shim disconnected" id=552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4 namespace=k8s.io Jul 10 23:36:27.478590 containerd[1952]: time="2025-07-10T23:36:27.478559316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:36:27.488819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4-rootfs.mount: Deactivated successfully. Jul 10 23:36:28.084517 containerd[1952]: time="2025-07-10T23:36:28.084458927Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:28.086569 containerd[1952]: time="2025-07-10T23:36:28.086494475Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 10 23:36:28.087532 containerd[1952]: time="2025-07-10T23:36:28.087442331Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:28.091917 containerd[1952]: time="2025-07-10T23:36:28.091684943Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.627157901s" Jul 10 23:36:28.091917 containerd[1952]: time="2025-07-10T23:36:28.091761911Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 10 23:36:28.097321 containerd[1952]: time="2025-07-10T23:36:28.097242383Z" level=info msg="CreateContainer within sandbox \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 23:36:28.119715 containerd[1952]: time="2025-07-10T23:36:28.119634551Z" level=info msg="CreateContainer within sandbox \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e\"" Jul 10 23:36:28.120466 containerd[1952]: time="2025-07-10T23:36:28.120422207Z" level=info msg="StartContainer for \"4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e\"" Jul 10 23:36:28.172897 containerd[1952]: time="2025-07-10T23:36:28.172523855Z" level=info msg="CreateContainer within sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 23:36:28.200637 systemd[1]: Started cri-containerd-4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e.scope - libcontainer container 
4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e. Jul 10 23:36:28.221164 containerd[1952]: time="2025-07-10T23:36:28.221084711Z" level=info msg="CreateContainer within sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0\"" Jul 10 23:36:28.224268 containerd[1952]: time="2025-07-10T23:36:28.224200343Z" level=info msg="StartContainer for \"b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0\"" Jul 10 23:36:28.338053 systemd[1]: Started cri-containerd-b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0.scope - libcontainer container b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0. Jul 10 23:36:28.406957 containerd[1952]: time="2025-07-10T23:36:28.406586304Z" level=info msg="StartContainer for \"4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e\" returns successfully" Jul 10 23:36:28.426759 containerd[1952]: time="2025-07-10T23:36:28.426583152Z" level=info msg="StartContainer for \"b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0\" returns successfully" Jul 10 23:36:28.441789 systemd[1]: cri-containerd-b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0.scope: Deactivated successfully. Jul 10 23:36:28.515246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0-rootfs.mount: Deactivated successfully. Jul 10 23:36:28.622589 containerd[1952]: time="2025-07-10T23:36:28.622054333Z" level=info msg="shim disconnected" id=b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0 namespace=k8s.io Jul 10 23:36:28.622589 containerd[1952]: time="2025-07-10T23:36:28.622137013Z" level=warning msg="cleaning up after shim disconnected" id=b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0 namespace=k8s.io Jul 10 23:36:28.622589 containerd[1952]: time="2025-07-10T23:36:28.622158169Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:36:29.182639 containerd[1952]: time="2025-07-10T23:36:29.182570808Z" level=info msg="CreateContainer within sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 23:36:29.222374 containerd[1952]: time="2025-07-10T23:36:29.221478108Z" level=info msg="CreateContainer within sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8\"" Jul 10 23:36:29.222642 containerd[1952]: time="2025-07-10T23:36:29.222523248Z" level=info msg="StartContainer for \"4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8\"" Jul 10 23:36:29.305055 systemd[1]: Started cri-containerd-4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8.scope - libcontainer container 4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8. Jul 10 23:36:29.435370 containerd[1952]: time="2025-07-10T23:36:29.435233725Z" level=info msg="StartContainer for \"4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8\" returns successfully" Jul 10 23:36:29.439479 systemd[1]: cri-containerd-4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8.scope: Deactivated successfully. 
Jul 10 23:36:29.511279 containerd[1952]: time="2025-07-10T23:36:29.510790454Z" level=info msg="shim disconnected" id=4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8 namespace=k8s.io Jul 10 23:36:29.511279 containerd[1952]: time="2025-07-10T23:36:29.510865202Z" level=warning msg="cleaning up after shim disconnected" id=4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8 namespace=k8s.io Jul 10 23:36:29.511279 containerd[1952]: time="2025-07-10T23:36:29.510884918Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:36:29.513019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8-rootfs.mount: Deactivated successfully. Jul 10 23:36:29.588129 kubelet[3358]: I0710 23:36:29.587838 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rlrkc" podStartSLOduration=3.009295754 podStartE2EDuration="12.587812886s" podCreationTimestamp="2025-07-10 23:36:17 +0000 UTC" firstStartedPulling="2025-07-10 23:36:18.514652475 +0000 UTC m=+6.773308379" lastFinishedPulling="2025-07-10 23:36:28.093169607 +0000 UTC m=+16.351825511" observedRunningTime="2025-07-10 23:36:29.304375513 +0000 UTC m=+17.563031441" watchObservedRunningTime="2025-07-10 23:36:29.587812886 +0000 UTC m=+17.846468814" Jul 10 23:36:30.201129 containerd[1952]: time="2025-07-10T23:36:30.200638513Z" level=info msg="CreateContainer within sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 23:36:30.229337 containerd[1952]: time="2025-07-10T23:36:30.229284145Z" level=info msg="CreateContainer within sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0\"" Jul 10 23:36:30.231054 containerd[1952]: time="2025-07-10T23:36:30.230628337Z" level=info msg="StartContainer for \"1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0\"" Jul 10 23:36:30.303043 systemd[1]: Started cri-containerd-1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0.scope - libcontainer container 1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0. Jul 10 23:36:30.381564 containerd[1952]: time="2025-07-10T23:36:30.381474518Z" level=info msg="StartContainer for \"1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0\" returns successfully" Jul 10 23:36:30.541083 kubelet[3358]: I0710 23:36:30.539350 3358 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 23:36:30.608349 systemd[1]: Created slice kubepods-burstable-podf4e6e152_527b_413b_8961_8533e1833436.slice - libcontainer container kubepods-burstable-podf4e6e152_527b_413b_8961_8533e1833436.slice. Jul 10 23:36:30.628692 systemd[1]: Created slice kubepods-burstable-pod38b516c5_5ca0_4af6_90d4_55a4e1a9d7ea.slice - libcontainer container kubepods-burstable-pod38b516c5_5ca0_4af6_90d4_55a4e1a9d7ea.slice. 
Jul 10 23:36:30.695179 kubelet[3358]: I0710 23:36:30.694888 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jwr7\" (UniqueName: \"kubernetes.io/projected/38b516c5-5ca0-4af6-90d4-55a4e1a9d7ea-kube-api-access-8jwr7\") pod \"coredns-668d6bf9bc-8gcsz\" (UID: \"38b516c5-5ca0-4af6-90d4-55a4e1a9d7ea\") " pod="kube-system/coredns-668d6bf9bc-8gcsz" Jul 10 23:36:30.695179 kubelet[3358]: I0710 23:36:30.694963 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4e6e152-527b-413b-8961-8533e1833436-config-volume\") pod \"coredns-668d6bf9bc-ck69d\" (UID: \"f4e6e152-527b-413b-8961-8533e1833436\") " pod="kube-system/coredns-668d6bf9bc-ck69d" Jul 10 23:36:30.695179 kubelet[3358]: I0710 23:36:30.695014 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srwsf\" (UniqueName: \"kubernetes.io/projected/f4e6e152-527b-413b-8961-8533e1833436-kube-api-access-srwsf\") pod \"coredns-668d6bf9bc-ck69d\" (UID: \"f4e6e152-527b-413b-8961-8533e1833436\") " pod="kube-system/coredns-668d6bf9bc-ck69d" Jul 10 23:36:30.695179 kubelet[3358]: I0710 23:36:30.695053 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38b516c5-5ca0-4af6-90d4-55a4e1a9d7ea-config-volume\") pod \"coredns-668d6bf9bc-8gcsz\" (UID: \"38b516c5-5ca0-4af6-90d4-55a4e1a9d7ea\") " pod="kube-system/coredns-668d6bf9bc-8gcsz" Jul 10 23:36:30.919037 containerd[1952]: time="2025-07-10T23:36:30.918972125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ck69d,Uid:f4e6e152-527b-413b-8961-8533e1833436,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:30.942457 containerd[1952]: time="2025-07-10T23:36:30.942096293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8gcsz,Uid:38b516c5-5ca0-4af6-90d4-55a4e1a9d7ea,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:33.569180 (udev-worker)[4162]: Network interface NamePolicy= disabled on kernel command line. Jul 10 23:36:33.579441 (udev-worker)[4161]: Network interface NamePolicy= disabled on kernel command line. Jul 10 23:36:33.579910 systemd-networkd[1868]: cilium_host: Link UP Jul 10 23:36:33.580348 systemd-networkd[1868]: cilium_net: Link UP Jul 10 23:36:33.582573 systemd-networkd[1868]: cilium_net: Gained carrier Jul 10 23:36:33.584848 systemd-networkd[1868]: cilium_host: Gained carrier Jul 10 23:36:33.748951 (udev-worker)[4207]: Network interface NamePolicy= disabled on kernel command line. 
Jul 10 23:36:33.759621 systemd-networkd[1868]: cilium_vxlan: Link UP Jul 10 23:36:33.759641 systemd-networkd[1868]: cilium_vxlan: Gained carrier Jul 10 23:36:33.946971 systemd-networkd[1868]: cilium_net: Gained IPv6LL Jul 10 23:36:34.351785 kernel: NET: Registered PF_ALG protocol family Jul 10 23:36:34.387084 systemd-networkd[1868]: cilium_host: Gained IPv6LL Jul 10 23:36:34.834994 systemd-networkd[1868]: cilium_vxlan: Gained IPv6LL Jul 10 23:36:35.690506 systemd-networkd[1868]: lxc_health: Link UP Jul 10 23:36:35.711690 systemd-networkd[1868]: lxc_health: Gained carrier Jul 10 23:36:35.884764 kubelet[3358]: I0710 23:36:35.884115 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8q847" podStartSLOduration=11.614645377 podStartE2EDuration="18.884087085s" podCreationTimestamp="2025-07-10 23:36:17 +0000 UTC" firstStartedPulling="2025-07-10 23:36:18.193532462 +0000 UTC m=+6.452188378" lastFinishedPulling="2025-07-10 23:36:25.46297417 +0000 UTC m=+13.721630086" observedRunningTime="2025-07-10 23:36:31.22837229 +0000 UTC m=+19.487028194" watchObservedRunningTime="2025-07-10 23:36:35.884087085 +0000 UTC m=+24.142742977" Jul 10 23:36:36.058057 systemd-networkd[1868]: lxcc28a23ad0904: Link UP Jul 10 23:36:36.067177 kernel: eth0: renamed from tmp0d49c Jul 10 23:36:36.086526 systemd-networkd[1868]: lxcc28a23ad0904: Gained carrier Jul 10 23:36:36.091086 systemd-networkd[1868]: lxce161240055ab: Link UP Jul 10 23:36:36.112850 kernel: eth0: renamed from tmpd2939 Jul 10 23:36:36.119664 systemd-networkd[1868]: lxce161240055ab: Gained carrier Jul 10 23:36:36.125383 (udev-worker)[4208]: Network interface NamePolicy= disabled on kernel command line. Jul 10 23:36:37.139014 systemd-networkd[1868]: lxce161240055ab: Gained IPv6LL Jul 10 23:36:37.266966 systemd-networkd[1868]: lxc_health: Gained IPv6LL Jul 10 23:36:37.331047 systemd-networkd[1868]: lxcc28a23ad0904: Gained IPv6LL Jul 10 23:36:40.242413 ntpd[1930]: Listen normally on 7 cilium_host 192.168.0.43:123 Jul 10 23:36:40.243342 ntpd[1930]: 10 Jul 23:36:40 ntpd[1930]: Listen normally on 7 cilium_host 192.168.0.43:123 Jul 10 23:36:40.243342 ntpd[1930]: 10 Jul 23:36:40 ntpd[1930]: Listen normally on 8 cilium_net [fe80::dce2:3fff:fe50:133a%4]:123 Jul 10 23:36:40.242573 ntpd[1930]: Listen normally on 8 cilium_net [fe80::dce2:3fff:fe50:133a%4]:123 Jul 10 23:36:40.242658 ntpd[1930]: Listen normally on 9 cilium_host [fe80::cceb:dbff:fe9c:23fe%5]:123 Jul 10 23:36:40.244445 ntpd[1930]: 10 Jul 23:36:40 ntpd[1930]: Listen normally on 9 cilium_host [fe80::cceb:dbff:fe9c:23fe%5]:123 Jul 10 23:36:40.244445 ntpd[1930]: 10 Jul 23:36:40 ntpd[1930]: Listen normally on 10 cilium_vxlan [fe80::c8db:60ff:feb3:7705%6]:123 Jul 10 23:36:40.244445 ntpd[1930]: 10 Jul 23:36:40 ntpd[1930]: Listen normally on 11 lxc_health [fe80::44c6:dfff:fe7e:916a%8]:123 Jul 10 23:36:40.244445 ntpd[1930]: 10 Jul 23:36:40 ntpd[1930]: Listen normally on 12 lxcc28a23ad0904 [fe80::fcb4:aff:fe0f:17cd%10]:123 Jul 10 23:36:40.244445 ntpd[1930]: 10 Jul 23:36:40 ntpd[1930]: Listen normally on 13 lxce161240055ab [fe80::7404:24ff:fed8:412d%12]:123 Jul 10 23:36:40.243813 ntpd[1930]: Listen normally on 10 cilium_vxlan [fe80::c8db:60ff:feb3:7705%6]:123 Jul 10 23:36:40.243895 ntpd[1930]: Listen normally on 11 lxc_health [fe80::44c6:dfff:fe7e:916a%8]:123 Jul 10 23:36:40.243963 ntpd[1930]: Listen normally on 12 lxcc28a23ad0904 [fe80::fcb4:aff:fe0f:17cd%10]:123 Jul 10 23:36:40.244042 ntpd[1930]: Listen normally on 13 lxce161240055ab [fe80::7404:24ff:fed8:412d%12]:123 Jul 10 
23:36:44.432310 containerd[1952]: time="2025-07-10T23:36:44.431112160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:44.432310 containerd[1952]: time="2025-07-10T23:36:44.431245264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:44.432310 containerd[1952]: time="2025-07-10T23:36:44.431282188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:44.432310 containerd[1952]: time="2025-07-10T23:36:44.431449324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:44.487991 systemd[1]: Started cri-containerd-d2939c50529ab59d926eea5b6c8f4453b25e134da772df44e46f1df75239fd04.scope - libcontainer container d2939c50529ab59d926eea5b6c8f4453b25e134da772df44e46f1df75239fd04. Jul 10 23:36:44.582282 containerd[1952]: time="2025-07-10T23:36:44.582144773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:44.582823 containerd[1952]: time="2025-07-10T23:36:44.582519509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:44.582823 containerd[1952]: time="2025-07-10T23:36:44.582607733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:44.583351 containerd[1952]: time="2025-07-10T23:36:44.583153697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:44.642462 containerd[1952]: time="2025-07-10T23:36:44.642268517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8gcsz,Uid:38b516c5-5ca0-4af6-90d4-55a4e1a9d7ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2939c50529ab59d926eea5b6c8f4453b25e134da772df44e46f1df75239fd04\"" Jul 10 23:36:44.644236 systemd[1]: Started cri-containerd-0d49c9e426c5488c9aa224cfd6dfb31f924ced5c27acb89714c4cabe9439a14a.scope - libcontainer container 0d49c9e426c5488c9aa224cfd6dfb31f924ced5c27acb89714c4cabe9439a14a. Jul 10 23:36:44.658530 containerd[1952]: time="2025-07-10T23:36:44.658331381Z" level=info msg="CreateContainer within sandbox \"d2939c50529ab59d926eea5b6c8f4453b25e134da772df44e46f1df75239fd04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:36:44.684115 containerd[1952]: time="2025-07-10T23:36:44.682960673Z" level=info msg="CreateContainer within sandbox \"d2939c50529ab59d926eea5b6c8f4453b25e134da772df44e46f1df75239fd04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b049115243c6949830157f9afbdab7889301477df38259591b9854933c49241\"" Jul 10 23:36:44.686312 containerd[1952]: time="2025-07-10T23:36:44.686245145Z" level=info msg="StartContainer for \"4b049115243c6949830157f9afbdab7889301477df38259591b9854933c49241\"" Jul 10 23:36:44.760920 systemd[1]: Started cri-containerd-4b049115243c6949830157f9afbdab7889301477df38259591b9854933c49241.scope - libcontainer container 4b049115243c6949830157f9afbdab7889301477df38259591b9854933c49241. 
Jul 10 23:36:44.809286 containerd[1952]: time="2025-07-10T23:36:44.809187798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ck69d,Uid:f4e6e152-527b-413b-8961-8533e1833436,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d49c9e426c5488c9aa224cfd6dfb31f924ced5c27acb89714c4cabe9439a14a\"" Jul 10 23:36:44.823506 containerd[1952]: time="2025-07-10T23:36:44.823207962Z" level=info msg="CreateContainer within sandbox \"0d49c9e426c5488c9aa224cfd6dfb31f924ced5c27acb89714c4cabe9439a14a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:36:44.851113 containerd[1952]: time="2025-07-10T23:36:44.850642638Z" level=info msg="CreateContainer within sandbox \"0d49c9e426c5488c9aa224cfd6dfb31f924ced5c27acb89714c4cabe9439a14a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f8543520106719e827d46a06bc2eb8705c0d2394e82730521e8625a05156da7\"" Jul 10 23:36:44.857784 containerd[1952]: time="2025-07-10T23:36:44.857586702Z" level=info msg="StartContainer for \"8f8543520106719e827d46a06bc2eb8705c0d2394e82730521e8625a05156da7\"" Jul 10 23:36:44.926197 containerd[1952]: time="2025-07-10T23:36:44.926104290Z" level=info msg="StartContainer for \"4b049115243c6949830157f9afbdab7889301477df38259591b9854933c49241\" returns successfully" Jul 10 23:36:44.976978 systemd[1]: Started cri-containerd-8f8543520106719e827d46a06bc2eb8705c0d2394e82730521e8625a05156da7.scope - libcontainer container 8f8543520106719e827d46a06bc2eb8705c0d2394e82730521e8625a05156da7. Jul 10 23:36:45.060808 containerd[1952]: time="2025-07-10T23:36:45.060035415Z" level=info msg="StartContainer for \"8f8543520106719e827d46a06bc2eb8705c0d2394e82730521e8625a05156da7\" returns successfully" Jul 10 23:36:45.270753 kubelet[3358]: I0710 23:36:45.269669 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8gcsz" podStartSLOduration=28.269643928 podStartE2EDuration="28.269643928s" podCreationTimestamp="2025-07-10 23:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:45.265084648 +0000 UTC m=+33.523740576" watchObservedRunningTime="2025-07-10 23:36:45.269643928 +0000 UTC m=+33.528299832" Jul 10 23:36:45.296538 kubelet[3358]: I0710 23:36:45.296043 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ck69d" podStartSLOduration=28.295996216 podStartE2EDuration="28.295996216s" podCreationTimestamp="2025-07-10 23:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:45.292568032 +0000 UTC m=+33.551223972" watchObservedRunningTime="2025-07-10 23:36:45.295996216 +0000 UTC m=+33.554652120" Jul 10 23:37:02.968365 systemd[1]: Started sshd@9-172.31.24.228:22-147.75.109.163:57338.service - OpenSSH per-connection server daemon (147.75.109.163:57338). Jul 10 23:37:03.167571 sshd[4738]: Accepted publickey for core from 147.75.109.163 port 57338 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:03.170515 sshd-session[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:03.180529 systemd-logind[1938]: New session 10 of user core. Jul 10 23:37:03.190007 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 10 23:37:03.468250 sshd[4740]: Connection closed by 147.75.109.163 port 57338 Jul 10 23:37:03.468000 sshd-session[4738]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:03.475915 systemd[1]: sshd@9-172.31.24.228:22-147.75.109.163:57338.service: Deactivated successfully. Jul 10 23:37:03.481204 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 23:37:03.482998 systemd-logind[1938]: Session 10 logged out. Waiting for processes to exit. Jul 10 23:37:03.486168 systemd-logind[1938]: Removed session 10. Jul 10 23:37:08.512338 systemd[1]: Started sshd@10-172.31.24.228:22-147.75.109.163:53562.service - OpenSSH per-connection server daemon (147.75.109.163:53562). Jul 10 23:37:08.695123 sshd[4754]: Accepted publickey for core from 147.75.109.163 port 53562 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:08.697695 sshd-session[4754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:08.706042 systemd-logind[1938]: New session 11 of user core. Jul 10 23:37:08.717044 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 23:37:08.958634 sshd[4756]: Connection closed by 147.75.109.163 port 53562 Jul 10 23:37:08.959621 sshd-session[4754]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:08.966611 systemd[1]: sshd@10-172.31.24.228:22-147.75.109.163:53562.service: Deactivated successfully. Jul 10 23:37:08.971553 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 23:37:08.974174 systemd-logind[1938]: Session 11 logged out. Waiting for processes to exit. Jul 10 23:37:08.976614 systemd-logind[1938]: Removed session 11. Jul 10 23:37:14.009314 systemd[1]: Started sshd@11-172.31.24.228:22-147.75.109.163:53566.service - OpenSSH per-connection server daemon (147.75.109.163:53566). Jul 10 23:37:14.202670 sshd[4771]: Accepted publickey for core from 147.75.109.163 port 53566 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:14.205482 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:14.215439 systemd-logind[1938]: New session 12 of user core. Jul 10 23:37:14.221128 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 23:37:14.465063 sshd[4773]: Connection closed by 147.75.109.163 port 53566 Jul 10 23:37:14.465978 sshd-session[4771]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:14.473480 systemd[1]: sshd@11-172.31.24.228:22-147.75.109.163:53566.service: Deactivated successfully. Jul 10 23:37:14.476975 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 23:37:14.478433 systemd-logind[1938]: Session 12 logged out. Waiting for processes to exit. Jul 10 23:37:14.480889 systemd-logind[1938]: Removed session 12. Jul 10 23:37:19.507268 systemd[1]: Started sshd@12-172.31.24.228:22-147.75.109.163:32898.service - OpenSSH per-connection server daemon (147.75.109.163:32898). Jul 10 23:37:19.698524 sshd[4788]: Accepted publickey for core from 147.75.109.163 port 32898 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:19.701079 sshd-session[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:19.709516 systemd-logind[1938]: New session 13 of user core. Jul 10 23:37:19.721019 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 10 23:37:19.968836 sshd[4790]: Connection closed by 147.75.109.163 port 32898 Jul 10 23:37:19.969707 sshd-session[4788]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:19.977152 systemd[1]: sshd@12-172.31.24.228:22-147.75.109.163:32898.service: Deactivated successfully. Jul 10 23:37:19.982892 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 23:37:19.986185 systemd-logind[1938]: Session 13 logged out. Waiting for processes to exit. Jul 10 23:37:19.988023 systemd-logind[1938]: Removed session 13. Jul 10 23:37:25.017278 systemd[1]: Started sshd@13-172.31.24.228:22-147.75.109.163:32906.service - OpenSSH per-connection server daemon (147.75.109.163:32906). Jul 10 23:37:25.198617 sshd[4802]: Accepted publickey for core from 147.75.109.163 port 32906 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:25.201193 sshd-session[4802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:25.211289 systemd-logind[1938]: New session 14 of user core. Jul 10 23:37:25.218035 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 23:37:25.458254 sshd[4805]: Connection closed by 147.75.109.163 port 32906 Jul 10 23:37:25.460073 sshd-session[4802]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:25.466370 systemd[1]: sshd@13-172.31.24.228:22-147.75.109.163:32906.service: Deactivated successfully. Jul 10 23:37:25.472918 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 23:37:25.474691 systemd-logind[1938]: Session 14 logged out. Waiting for processes to exit. Jul 10 23:37:25.476675 systemd-logind[1938]: Removed session 14. Jul 10 23:37:25.498267 systemd[1]: Started sshd@14-172.31.24.228:22-147.75.109.163:32914.service - OpenSSH per-connection server daemon (147.75.109.163:32914). Jul 10 23:37:25.690121 sshd[4818]: Accepted publickey for core from 147.75.109.163 port 32914 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:25.692842 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:25.702109 systemd-logind[1938]: New session 15 of user core. Jul 10 23:37:25.711006 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 23:37:26.036470 sshd[4820]: Connection closed by 147.75.109.163 port 32914 Jul 10 23:37:26.036070 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:26.047601 systemd[1]: sshd@14-172.31.24.228:22-147.75.109.163:32914.service: Deactivated successfully. Jul 10 23:37:26.055371 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 23:37:26.059362 systemd-logind[1938]: Session 15 logged out. Waiting for processes to exit. Jul 10 23:37:26.086495 systemd[1]: Started sshd@15-172.31.24.228:22-147.75.109.163:53474.service - OpenSSH per-connection server daemon (147.75.109.163:53474). Jul 10 23:37:26.091008 systemd-logind[1938]: Removed session 15. Jul 10 23:37:26.291415 sshd[4829]: Accepted publickey for core from 147.75.109.163 port 53474 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:26.294875 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:26.303236 systemd-logind[1938]: New session 16 of user core. Jul 10 23:37:26.312032 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 10 23:37:26.561339 sshd[4832]: Connection closed by 147.75.109.163 port 53474 Jul 10 23:37:26.562656 sshd-session[4829]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:26.568882 systemd[1]: sshd@15-172.31.24.228:22-147.75.109.163:53474.service: Deactivated successfully. Jul 10 23:37:26.573810 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 23:37:26.575917 systemd-logind[1938]: Session 16 logged out. Waiting for processes to exit. Jul 10 23:37:26.578387 systemd-logind[1938]: Removed session 16. Jul 10 23:37:31.606267 systemd[1]: Started sshd@16-172.31.24.228:22-147.75.109.163:53478.service - OpenSSH per-connection server daemon (147.75.109.163:53478). Jul 10 23:37:31.785968 sshd[4845]: Accepted publickey for core from 147.75.109.163 port 53478 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:31.788502 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:31.797104 systemd-logind[1938]: New session 17 of user core. Jul 10 23:37:31.810030 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 23:37:32.059895 sshd[4847]: Connection closed by 147.75.109.163 port 53478 Jul 10 23:37:32.060821 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:32.066308 systemd[1]: sshd@16-172.31.24.228:22-147.75.109.163:53478.service: Deactivated successfully. Jul 10 23:37:32.069573 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 23:37:32.073416 systemd-logind[1938]: Session 17 logged out. Waiting for processes to exit. Jul 10 23:37:32.076288 systemd-logind[1938]: Removed session 17. Jul 10 23:37:37.107452 systemd[1]: Started sshd@17-172.31.24.228:22-147.75.109.163:36824.service - OpenSSH per-connection server daemon (147.75.109.163:36824). Jul 10 23:37:37.289903 sshd[4859]: Accepted publickey for core from 147.75.109.163 port 36824 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:37.291628 sshd-session[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:37.302878 systemd-logind[1938]: New session 18 of user core. Jul 10 23:37:37.311073 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 23:37:37.550444 sshd[4861]: Connection closed by 147.75.109.163 port 36824 Jul 10 23:37:37.551636 sshd-session[4859]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:37.557512 systemd[1]: sshd@17-172.31.24.228:22-147.75.109.163:36824.service: Deactivated successfully. Jul 10 23:37:37.562306 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 23:37:37.565053 systemd-logind[1938]: Session 18 logged out. Waiting for processes to exit. Jul 10 23:37:37.567288 systemd-logind[1938]: Removed session 18. Jul 10 23:37:42.597019 systemd[1]: Started sshd@18-172.31.24.228:22-147.75.109.163:36836.service - OpenSSH per-connection server daemon (147.75.109.163:36836). Jul 10 23:37:42.787698 sshd[4872]: Accepted publickey for core from 147.75.109.163 port 36836 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:42.790370 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:42.800158 systemd-logind[1938]: New session 19 of user core. Jul 10 23:37:42.805999 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 10 23:37:43.058921 sshd[4874]: Connection closed by 147.75.109.163 port 36836 Jul 10 23:37:43.059837 sshd-session[4872]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:43.067621 systemd[1]: sshd@18-172.31.24.228:22-147.75.109.163:36836.service: Deactivated successfully. Jul 10 23:37:43.072132 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 23:37:43.074849 systemd-logind[1938]: Session 19 logged out. Waiting for processes to exit. Jul 10 23:37:43.076749 systemd-logind[1938]: Removed session 19. Jul 10 23:37:43.104369 systemd[1]: Started sshd@19-172.31.24.228:22-147.75.109.163:36848.service - OpenSSH per-connection server daemon (147.75.109.163:36848). Jul 10 23:37:43.292156 sshd[4886]: Accepted publickey for core from 147.75.109.163 port 36848 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:43.294897 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:43.305830 systemd-logind[1938]: New session 20 of user core. Jul 10 23:37:43.312029 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 23:37:43.639961 sshd[4888]: Connection closed by 147.75.109.163 port 36848 Jul 10 23:37:43.640425 sshd-session[4886]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:43.647921 systemd[1]: sshd@19-172.31.24.228:22-147.75.109.163:36848.service: Deactivated successfully. Jul 10 23:37:43.653405 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 23:37:43.656181 systemd-logind[1938]: Session 20 logged out. Waiting for processes to exit. Jul 10 23:37:43.658195 systemd-logind[1938]: Removed session 20. Jul 10 23:37:43.682243 systemd[1]: Started sshd@20-172.31.24.228:22-147.75.109.163:36856.service - OpenSSH per-connection server daemon (147.75.109.163:36856). Jul 10 23:37:43.870473 sshd[4898]: Accepted publickey for core from 147.75.109.163 port 36856 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:43.872976 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:43.882173 systemd-logind[1938]: New session 21 of user core. Jul 10 23:37:43.888004 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 23:37:45.236816 sshd[4900]: Connection closed by 147.75.109.163 port 36856 Jul 10 23:37:45.239923 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:45.249902 systemd[1]: sshd@20-172.31.24.228:22-147.75.109.163:36856.service: Deactivated successfully. Jul 10 23:37:45.256369 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 23:37:45.263806 systemd-logind[1938]: Session 21 logged out. Waiting for processes to exit. Jul 10 23:37:45.287192 systemd[1]: Started sshd@21-172.31.24.228:22-147.75.109.163:36864.service - OpenSSH per-connection server daemon (147.75.109.163:36864). Jul 10 23:37:45.288910 systemd-logind[1938]: Removed session 21. Jul 10 23:37:45.482285 sshd[4916]: Accepted publickey for core from 147.75.109.163 port 36864 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:45.484933 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:45.493393 systemd-logind[1938]: New session 22 of user core. Jul 10 23:37:45.503005 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 10 23:37:45.995202 sshd[4919]: Connection closed by 147.75.109.163 port 36864 Jul 10 23:37:45.996390 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:46.004238 systemd-logind[1938]: Session 22 logged out. Waiting for processes to exit. Jul 10 23:37:46.005379 systemd[1]: sshd@21-172.31.24.228:22-147.75.109.163:36864.service: Deactivated successfully. Jul 10 23:37:46.013419 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 23:37:46.040360 systemd-logind[1938]: Removed session 22. Jul 10 23:37:46.047265 systemd[1]: Started sshd@22-172.31.24.228:22-147.75.109.163:48192.service - OpenSSH per-connection server daemon (147.75.109.163:48192). Jul 10 23:37:46.239285 sshd[4928]: Accepted publickey for core from 147.75.109.163 port 48192 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:46.241992 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:46.251556 systemd-logind[1938]: New session 23 of user core. Jul 10 23:37:46.263045 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 23:37:46.496970 sshd[4931]: Connection closed by 147.75.109.163 port 48192 Jul 10 23:37:46.498051 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:46.504426 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 23:37:46.508013 systemd[1]: sshd@22-172.31.24.228:22-147.75.109.163:48192.service: Deactivated successfully. Jul 10 23:37:46.512623 systemd-logind[1938]: Session 23 logged out. Waiting for processes to exit. Jul 10 23:37:46.515462 systemd-logind[1938]: Removed session 23. Jul 10 23:37:51.541273 systemd[1]: Started sshd@23-172.31.24.228:22-147.75.109.163:48208.service - OpenSSH per-connection server daemon (147.75.109.163:48208). Jul 10 23:37:51.734785 sshd[4945]: Accepted publickey for core from 147.75.109.163 port 48208 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:51.737272 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:51.750405 systemd-logind[1938]: New session 24 of user core. Jul 10 23:37:51.755034 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 23:37:51.997173 sshd[4947]: Connection closed by 147.75.109.163 port 48208 Jul 10 23:37:51.999107 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:52.008934 systemd-logind[1938]: Session 24 logged out. Waiting for processes to exit. Jul 10 23:37:52.010279 systemd[1]: sshd@23-172.31.24.228:22-147.75.109.163:48208.service: Deactivated successfully. Jul 10 23:37:52.014181 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 23:37:52.016773 systemd-logind[1938]: Removed session 24. Jul 10 23:37:57.041285 systemd[1]: Started sshd@24-172.31.24.228:22-147.75.109.163:48952.service - OpenSSH per-connection server daemon (147.75.109.163:48952). Jul 10 23:37:57.224604 sshd[4961]: Accepted publickey for core from 147.75.109.163 port 48952 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:37:57.227136 sshd-session[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:57.235922 systemd-logind[1938]: New session 25 of user core. Jul 10 23:37:57.249068 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 10 23:37:57.489853 sshd[4963]: Connection closed by 147.75.109.163 port 48952 Jul 10 23:37:57.490853 sshd-session[4961]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:57.497519 systemd[1]: sshd@24-172.31.24.228:22-147.75.109.163:48952.service: Deactivated successfully. Jul 10 23:37:57.502011 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 23:37:57.504985 systemd-logind[1938]: Session 25 logged out. Waiting for processes to exit. Jul 10 23:37:57.507142 systemd-logind[1938]: Removed session 25. Jul 10 23:38:02.538310 systemd[1]: Started sshd@25-172.31.24.228:22-147.75.109.163:48968.service - OpenSSH per-connection server daemon (147.75.109.163:48968). Jul 10 23:38:02.741620 sshd[4974]: Accepted publickey for core from 147.75.109.163 port 48968 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:38:02.745159 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:02.753404 systemd-logind[1938]: New session 26 of user core. Jul 10 23:38:02.759075 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 23:38:03.004386 sshd[4976]: Connection closed by 147.75.109.163 port 48968 Jul 10 23:38:03.004924 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:03.017427 systemd[1]: sshd@25-172.31.24.228:22-147.75.109.163:48968.service: Deactivated successfully. Jul 10 23:38:03.025170 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 23:38:03.027520 systemd-logind[1938]: Session 26 logged out. Waiting for processes to exit. Jul 10 23:38:03.033220 systemd-logind[1938]: Removed session 26. Jul 10 23:38:08.048696 systemd[1]: Started sshd@26-172.31.24.228:22-147.75.109.163:41536.service - OpenSSH per-connection server daemon (147.75.109.163:41536). Jul 10 23:38:08.250684 sshd[4988]: Accepted publickey for core from 147.75.109.163 port 41536 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:38:08.253085 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:08.263368 systemd-logind[1938]: New session 27 of user core. Jul 10 23:38:08.271175 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 10 23:38:08.527568 sshd[4990]: Connection closed by 147.75.109.163 port 41536 Jul 10 23:38:08.528570 sshd-session[4988]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:08.536566 systemd[1]: sshd@26-172.31.24.228:22-147.75.109.163:41536.service: Deactivated successfully. Jul 10 23:38:08.541547 systemd[1]: session-27.scope: Deactivated successfully. Jul 10 23:38:08.544586 systemd-logind[1938]: Session 27 logged out. Waiting for processes to exit. Jul 10 23:38:08.566648 systemd-logind[1938]: Removed session 27. Jul 10 23:38:08.576318 systemd[1]: Started sshd@27-172.31.24.228:22-147.75.109.163:41548.service - OpenSSH per-connection server daemon (147.75.109.163:41548). Jul 10 23:38:08.765718 sshd[5001]: Accepted publickey for core from 147.75.109.163 port 41548 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:38:08.768457 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:08.778837 systemd-logind[1938]: New session 28 of user core. Jul 10 23:38:08.791083 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jul 10 23:38:11.131191 containerd[1952]: time="2025-07-10T23:38:11.130481451Z" level=info msg="StopContainer for \"4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e\" with timeout 30 (s)" Jul 10 23:38:11.137837 containerd[1952]: time="2025-07-10T23:38:11.135446427Z" level=info msg="Stop container \"4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e\" with signal terminated" Jul 10 23:38:11.181137 systemd[1]: run-containerd-runc-k8s.io-1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0-runc.nXyE82.mount: Deactivated successfully. Jul 10 23:38:11.191397 systemd[1]: cri-containerd-4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e.scope: Deactivated successfully. Jul 10 23:38:11.202313 containerd[1952]: time="2025-07-10T23:38:11.202157043Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 23:38:11.226493 containerd[1952]: time="2025-07-10T23:38:11.226149699Z" level=info msg="StopContainer for \"1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0\" with timeout 2 (s)" Jul 10 23:38:11.227290 containerd[1952]: time="2025-07-10T23:38:11.226804671Z" level=info msg="Stop container \"1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0\" with signal terminated" Jul 10 23:38:11.262050 systemd-networkd[1868]: lxc_health: Link DOWN Jul 10 23:38:11.262076 systemd-networkd[1868]: lxc_health: Lost carrier Jul 10 23:38:11.284433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e-rootfs.mount: Deactivated successfully. Jul 10 23:38:11.295767 containerd[1952]: time="2025-07-10T23:38:11.295615359Z" level=info msg="shim disconnected" id=4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e namespace=k8s.io Jul 10 23:38:11.295767 containerd[1952]: time="2025-07-10T23:38:11.295710387Z" level=warning msg="cleaning up after shim disconnected" id=4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e namespace=k8s.io Jul 10 23:38:11.295767 containerd[1952]: time="2025-07-10T23:38:11.295753599Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:38:11.299229 systemd[1]: cri-containerd-1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0.scope: Deactivated successfully. Jul 10 23:38:11.300517 systemd[1]: cri-containerd-1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0.scope: Consumed 14.715s CPU time, 127.4M memory peak, 136K read from disk, 12.9M written to disk. Jul 10 23:38:11.332342 containerd[1952]: time="2025-07-10T23:38:11.331870276Z" level=info msg="StopContainer for \"4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e\" returns successfully" Jul 10 23:38:11.333196 containerd[1952]: time="2025-07-10T23:38:11.332905084Z" level=info msg="StopPodSandbox for \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\"" Jul 10 23:38:11.333196 containerd[1952]: time="2025-07-10T23:38:11.332977720Z" level=info msg="Container to stop \"4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:38:11.340199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c-shm.mount: Deactivated successfully. 
Jul 10 23:38:11.361246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0-rootfs.mount: Deactivated successfully. Jul 10 23:38:11.364622 systemd[1]: cri-containerd-84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c.scope: Deactivated successfully. Jul 10 23:38:11.376809 containerd[1952]: time="2025-07-10T23:38:11.376554928Z" level=info msg="shim disconnected" id=1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0 namespace=k8s.io Jul 10 23:38:11.376809 containerd[1952]: time="2025-07-10T23:38:11.376637320Z" level=warning msg="cleaning up after shim disconnected" id=1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0 namespace=k8s.io Jul 10 23:38:11.376809 containerd[1952]: time="2025-07-10T23:38:11.376657840Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:38:11.430228 containerd[1952]: time="2025-07-10T23:38:11.430145308Z" level=info msg="shim disconnected" id=84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c namespace=k8s.io Jul 10 23:38:11.431088 containerd[1952]: time="2025-07-10T23:38:11.430540456Z" level=warning msg="cleaning up after shim disconnected" id=84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c namespace=k8s.io Jul 10 23:38:11.431088 containerd[1952]: time="2025-07-10T23:38:11.430872304Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:38:11.433370 containerd[1952]: time="2025-07-10T23:38:11.433022956Z" level=info msg="StopContainer for \"1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0\" returns successfully" Jul 10 23:38:11.435851 containerd[1952]: time="2025-07-10T23:38:11.435627940Z" level=info msg="StopPodSandbox for \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\"" Jul 10 23:38:11.435851 containerd[1952]: time="2025-07-10T23:38:11.435769204Z" level=info msg="Container to stop \"b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:38:11.435851 containerd[1952]: time="2025-07-10T23:38:11.435798580Z" level=info msg="Container to stop \"4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:38:11.436329 containerd[1952]: time="2025-07-10T23:38:11.435821140Z" level=info msg="Container to stop \"1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:38:11.436329 containerd[1952]: time="2025-07-10T23:38:11.436189492Z" level=info msg="Container to stop \"8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:38:11.436329 containerd[1952]: time="2025-07-10T23:38:11.436282396Z" level=info msg="Container to stop \"552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:38:11.449567 systemd[1]: cri-containerd-16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c.scope: Deactivated successfully. 
Jul 10 23:38:11.468768 containerd[1952]: time="2025-07-10T23:38:11.468642136Z" level=info msg="TearDown network for sandbox \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\" successfully" Jul 10 23:38:11.468768 containerd[1952]: time="2025-07-10T23:38:11.468700612Z" level=info msg="StopPodSandbox for \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\" returns successfully" Jul 10 23:38:11.480695 kubelet[3358]: I0710 23:38:11.480481 3358 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c" Jul 10 23:38:11.519393 containerd[1952]: time="2025-07-10T23:38:11.518283436Z" level=info msg="shim disconnected" id=16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c namespace=k8s.io Jul 10 23:38:11.519393 containerd[1952]: time="2025-07-10T23:38:11.518502256Z" level=warning msg="cleaning up after shim disconnected" id=16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c namespace=k8s.io Jul 10 23:38:11.519393 containerd[1952]: time="2025-07-10T23:38:11.518530348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:38:11.544625 containerd[1952]: time="2025-07-10T23:38:11.544486217Z" level=info msg="TearDown network for sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" successfully" Jul 10 23:38:11.544625 containerd[1952]: time="2025-07-10T23:38:11.544540997Z" level=info msg="StopPodSandbox for \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" returns successfully" Jul 10 23:38:11.594773 kubelet[3358]: I0710 23:38:11.594168 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf1035e0-e9c8-4fed-af95-45f2d49e722d-cilium-config-path\") pod \"bf1035e0-e9c8-4fed-af95-45f2d49e722d\" (UID: \"bf1035e0-e9c8-4fed-af95-45f2d49e722d\") " Jul 10 23:38:11.594773 kubelet[3358]: I0710 23:38:11.594247 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bzbt\" (UniqueName: \"kubernetes.io/projected/bf1035e0-e9c8-4fed-af95-45f2d49e722d-kube-api-access-9bzbt\") pod \"bf1035e0-e9c8-4fed-af95-45f2d49e722d\" (UID: \"bf1035e0-e9c8-4fed-af95-45f2d49e722d\") " Jul 10 23:38:11.599779 kubelet[3358]: I0710 23:38:11.599657 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1035e0-e9c8-4fed-af95-45f2d49e722d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bf1035e0-e9c8-4fed-af95-45f2d49e722d" (UID: "bf1035e0-e9c8-4fed-af95-45f2d49e722d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 23:38:11.601238 kubelet[3358]: I0710 23:38:11.601163 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1035e0-e9c8-4fed-af95-45f2d49e722d-kube-api-access-9bzbt" (OuterVolumeSpecName: "kube-api-access-9bzbt") pod "bf1035e0-e9c8-4fed-af95-45f2d49e722d" (UID: "bf1035e0-e9c8-4fed-af95-45f2d49e722d"). InnerVolumeSpecName "kube-api-access-9bzbt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:38:11.696870 kubelet[3358]: I0710 23:38:11.695474 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-hubble-tls\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.696870 kubelet[3358]: I0710 23:38:11.695546 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cilium-run\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.696870 kubelet[3358]: I0710 23:38:11.695594 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cilium-config-path\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.696870 kubelet[3358]: I0710 23:38:11.695650 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-bpf-maps\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.696870 kubelet[3358]: I0710 23:38:11.695685 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-host-proc-sys-net\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.696870 kubelet[3358]: I0710 23:38:11.695719 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-etc-cni-netd\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.697281 kubelet[3358]: I0710 23:38:11.695812 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xmpx\" (UniqueName: \"kubernetes.io/projected/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-kube-api-access-6xmpx\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.697281 kubelet[3358]: I0710 23:38:11.695855 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cni-path\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.697281 kubelet[3358]: I0710 23:38:11.695895 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-clustermesh-secrets\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.697281 kubelet[3358]: I0710 23:38:11.695928 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-host-proc-sys-kernel\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: 
\"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.697281 kubelet[3358]: I0710 23:38:11.695962 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-hostproc\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.697281 kubelet[3358]: I0710 23:38:11.695995 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cilium-cgroup\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.697616 kubelet[3358]: I0710 23:38:11.696028 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-xtables-lock\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.697616 kubelet[3358]: I0710 23:38:11.696065 3358 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-lib-modules\") pod \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\" (UID: \"b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32\") " Jul 10 23:38:11.697616 kubelet[3358]: I0710 23:38:11.696164 3358 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9bzbt\" (UniqueName: \"kubernetes.io/projected/bf1035e0-e9c8-4fed-af95-45f2d49e722d-kube-api-access-9bzbt\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.697616 kubelet[3358]: I0710 23:38:11.696194 3358 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf1035e0-e9c8-4fed-af95-45f2d49e722d-cilium-config-path\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.697616 kubelet[3358]: I0710 23:38:11.696242 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:11.697616 kubelet[3358]: I0710 23:38:11.696303 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:11.699274 kubelet[3358]: I0710 23:38:11.699209 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:11.699445 kubelet[3358]: I0710 23:38:11.699296 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:11.699445 kubelet[3358]: I0710 23:38:11.699339 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:11.700798 kubelet[3358]: I0710 23:38:11.700303 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:11.700798 kubelet[3358]: I0710 23:38:11.700390 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cni-path" (OuterVolumeSpecName: "cni-path") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:11.703762 kubelet[3358]: I0710 23:38:11.703682 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:11.703993 kubelet[3358]: I0710 23:38:11.703863 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-hostproc" (OuterVolumeSpecName: "hostproc") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:11.703993 kubelet[3358]: I0710 23:38:11.703909 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:11.705787 kubelet[3358]: I0710 23:38:11.705058 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-kube-api-access-6xmpx" (OuterVolumeSpecName: "kube-api-access-6xmpx") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "kube-api-access-6xmpx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:38:11.709103 kubelet[3358]: I0710 23:38:11.709018 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:38:11.709315 kubelet[3358]: I0710 23:38:11.709264 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 23:38:11.711299 kubelet[3358]: I0710 23:38:11.711229 3358 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" (UID: "b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 23:38:11.796508 kubelet[3358]: I0710 23:38:11.796459 3358 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-host-proc-sys-net\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.796809 kubelet[3358]: I0710 23:38:11.796780 3358 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-bpf-maps\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.797245 kubelet[3358]: I0710 23:38:11.796926 3358 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-etc-cni-netd\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.797245 kubelet[3358]: I0710 23:38:11.796956 3358 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-clustermesh-secrets\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.797245 kubelet[3358]: I0710 23:38:11.796980 3358 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-host-proc-sys-kernel\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.797245 kubelet[3358]: I0710 23:38:11.797002 3358 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6xmpx\" (UniqueName: \"kubernetes.io/projected/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-kube-api-access-6xmpx\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.797245 kubelet[3358]: I0710 23:38:11.797023 3358 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cni-path\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.797245 kubelet[3358]: I0710 23:38:11.797048 3358 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-hostproc\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.797245 kubelet[3358]: I0710 23:38:11.797068 3358 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cilium-cgroup\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.797245 kubelet[3358]: I0710 23:38:11.797097 3358 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-lib-modules\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.797678 kubelet[3358]: I0710 23:38:11.797118 3358 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-xtables-lock\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.797678 kubelet[3358]: I0710 23:38:11.797161 3358 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-hubble-tls\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.797678 kubelet[3358]: I0710 23:38:11.797185 3358 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cilium-run\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:11.797678 kubelet[3358]: I0710 23:38:11.797211 3358 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32-cilium-config-path\") on node \"ip-172-31-24-228\" DevicePath \"\"" Jul 10 23:38:12.001964 kubelet[3358]: I0710 23:38:12.001516 3358 scope.go:117] "RemoveContainer" containerID="4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8" Jul 10 23:38:12.005716 containerd[1952]: time="2025-07-10T23:38:12.005443059Z" level=info msg="RemoveContainer for \"4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8\"" Jul 10 23:38:12.015707 containerd[1952]: time="2025-07-10T23:38:12.015616467Z" level=info msg="RemoveContainer for \"4940d7c41b1fe198f8e0e58704948332e99f192f4e76383bc80c445e832da6f8\" returns successfully" Jul 10 23:38:12.016126 kubelet[3358]: I0710 23:38:12.016075 3358 scope.go:117] "RemoveContainer" containerID="1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0" Jul 10 23:38:12.019824 containerd[1952]: time="2025-07-10T23:38:12.019773723Z" level=info msg="RemoveContainer for \"1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0\"" Jul 10 23:38:12.027474 containerd[1952]: time="2025-07-10T23:38:12.027410799Z" level=info msg="RemoveContainer for \"1c997e58183d1a22dbd47e9d4898c7d8766f438010c2cbfdb7894a2cf6395eb0\" returns successfully" Jul 10 23:38:12.029377 kubelet[3358]: I0710 23:38:12.028040 3358 scope.go:117] "RemoveContainer" containerID="8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5" Jul 10 23:38:12.031955 containerd[1952]: time="2025-07-10T23:38:12.031853727Z" level=info msg="RemoveContainer for \"8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5\"" Jul 10 23:38:12.041797 containerd[1952]: time="2025-07-10T23:38:12.040909287Z" level=info msg="RemoveContainer for \"8de10017386276846bc37d9c3fc4d45c1f166bb9c1a2b7e453963e55d84b00d5\" returns successfully" Jul 10 23:38:12.042233 kubelet[3358]: I0710 23:38:12.042198 3358 scope.go:117] 
"RemoveContainer" containerID="552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4" Jul 10 23:38:12.047797 containerd[1952]: time="2025-07-10T23:38:12.047701827Z" level=info msg="RemoveContainer for \"552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4\"" Jul 10 23:38:12.050132 systemd[1]: Removed slice kubepods-besteffort-podbf1035e0_e9c8_4fed_af95_45f2d49e722d.slice - libcontainer container kubepods-besteffort-podbf1035e0_e9c8_4fed_af95_45f2d49e722d.slice. Jul 10 23:38:12.056534 containerd[1952]: time="2025-07-10T23:38:12.056239191Z" level=info msg="RemoveContainer for \"552425cb3cc8aaa62aeea11f697714730e32cfacd23d1e8b831948766afae3c4\" returns successfully" Jul 10 23:38:12.058930 kubelet[3358]: I0710 23:38:12.058872 3358 scope.go:117] "RemoveContainer" containerID="4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e" Jul 10 23:38:12.062813 containerd[1952]: time="2025-07-10T23:38:12.061538991Z" level=info msg="RemoveContainer for \"4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e\"" Jul 10 23:38:12.063316 systemd[1]: Removed slice kubepods-burstable-podb2ad7c4f_a5b9_43ef_bc9a_85030dc02a32.slice - libcontainer container kubepods-burstable-podb2ad7c4f_a5b9_43ef_bc9a_85030dc02a32.slice. Jul 10 23:38:12.063561 systemd[1]: kubepods-burstable-podb2ad7c4f_a5b9_43ef_bc9a_85030dc02a32.slice: Consumed 14.889s CPU time, 127.8M memory peak, 136K read from disk, 12.9M written to disk. Jul 10 23:38:12.069724 containerd[1952]: time="2025-07-10T23:38:12.069647199Z" level=info msg="RemoveContainer for \"4ce43644ab47119e7d5643038d0b0fa5fa0d42d187514786b95e5391f09d150e\" returns successfully" Jul 10 23:38:12.070274 kubelet[3358]: I0710 23:38:12.070204 3358 scope.go:117] "RemoveContainer" containerID="b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0" Jul 10 23:38:12.073001 containerd[1952]: time="2025-07-10T23:38:12.072917103Z" level=info msg="RemoveContainer for \"b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0\"" Jul 10 23:38:12.080724 containerd[1952]: time="2025-07-10T23:38:12.080633799Z" level=info msg="RemoveContainer for \"b6194ec96a46737f645a14f8b9786cd5b78ded0207ce968b2ee207d7680840a0\" returns successfully" Jul 10 23:38:12.083180 containerd[1952]: time="2025-07-10T23:38:12.083109987Z" level=info msg="StopPodSandbox for \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\"" Jul 10 23:38:12.083335 containerd[1952]: time="2025-07-10T23:38:12.083269071Z" level=info msg="TearDown network for sandbox \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\" successfully" Jul 10 23:38:12.083335 containerd[1952]: time="2025-07-10T23:38:12.083294727Z" level=info msg="StopPodSandbox for \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\" returns successfully" Jul 10 23:38:12.084894 containerd[1952]: time="2025-07-10T23:38:12.084149559Z" level=info msg="RemovePodSandbox for \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\"" Jul 10 23:38:12.084894 containerd[1952]: time="2025-07-10T23:38:12.084196875Z" level=info msg="Forcibly stopping sandbox \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\"" Jul 10 23:38:12.084894 containerd[1952]: time="2025-07-10T23:38:12.084296067Z" level=info msg="TearDown network for sandbox \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\" successfully" Jul 10 23:38:12.090390 containerd[1952]: time="2025-07-10T23:38:12.090317247Z" level=warning msg="Failed to get podSandbox status for container 
event for sandboxID \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 23:38:12.090524 containerd[1952]: time="2025-07-10T23:38:12.090413139Z" level=info msg="RemovePodSandbox \"84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c\" returns successfully" Jul 10 23:38:12.091591 containerd[1952]: time="2025-07-10T23:38:12.091542567Z" level=info msg="StopPodSandbox for \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\"" Jul 10 23:38:12.091939 containerd[1952]: time="2025-07-10T23:38:12.091903695Z" level=info msg="TearDown network for sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" successfully" Jul 10 23:38:12.092075 containerd[1952]: time="2025-07-10T23:38:12.092046963Z" level=info msg="StopPodSandbox for \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" returns successfully" Jul 10 23:38:12.092968 containerd[1952]: time="2025-07-10T23:38:12.092789751Z" level=info msg="RemovePodSandbox for \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\"" Jul 10 23:38:12.092968 containerd[1952]: time="2025-07-10T23:38:12.092842983Z" level=info msg="Forcibly stopping sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\"" Jul 10 23:38:12.093776 containerd[1952]: time="2025-07-10T23:38:12.093517059Z" level=info msg="TearDown network for sandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" successfully" Jul 10 23:38:12.099683 containerd[1952]: time="2025-07-10T23:38:12.099586551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 23:38:12.099683 containerd[1952]: time="2025-07-10T23:38:12.099682755Z" level=info msg="RemovePodSandbox \"16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c\" returns successfully" Jul 10 23:38:12.159864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84343f8616e0e96ef4a02b5f48dd4660de1d2881497b89dcf3ac60e1d6948a2c-rootfs.mount: Deactivated successfully. Jul 10 23:38:12.160057 systemd[1]: var-lib-kubelet-pods-bf1035e0\x2de9c8\x2d4fed\x2daf95\x2d45f2d49e722d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9bzbt.mount: Deactivated successfully. Jul 10 23:38:12.160203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c-rootfs.mount: Deactivated successfully. Jul 10 23:38:12.160338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16c3a31acddd23816e7506a95ae137bef54490bdee04715ee22d68a798338d1c-shm.mount: Deactivated successfully. Jul 10 23:38:12.160483 systemd[1]: var-lib-kubelet-pods-b2ad7c4f\x2da5b9\x2d43ef\x2dbc9a\x2d85030dc02a32-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6xmpx.mount: Deactivated successfully. Jul 10 23:38:12.160637 systemd[1]: var-lib-kubelet-pods-b2ad7c4f\x2da5b9\x2d43ef\x2dbc9a\x2d85030dc02a32-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 23:38:12.160812 systemd[1]: var-lib-kubelet-pods-b2ad7c4f\x2da5b9\x2d43ef\x2dbc9a\x2d85030dc02a32-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 10 23:38:12.225621 kubelet[3358]: E0710 23:38:12.225509 3358 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 23:38:13.048961 sshd[5004]: Connection closed by 147.75.109.163 port 41548 Jul 10 23:38:13.050094 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:13.057441 systemd[1]: sshd@27-172.31.24.228:22-147.75.109.163:41548.service: Deactivated successfully. Jul 10 23:38:13.063287 systemd[1]: session-28.scope: Deactivated successfully. Jul 10 23:38:13.064372 systemd[1]: session-28.scope: Consumed 1.566s CPU time, 23.6M memory peak. Jul 10 23:38:13.066364 systemd-logind[1938]: Session 28 logged out. Waiting for processes to exit. Jul 10 23:38:13.069253 systemd-logind[1938]: Removed session 28. Jul 10 23:38:13.094813 systemd[1]: Started sshd@28-172.31.24.228:22-147.75.109.163:41564.service - OpenSSH per-connection server daemon (147.75.109.163:41564). Jul 10 23:38:13.285586 sshd[5173]: Accepted publickey for core from 147.75.109.163 port 41564 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:38:13.288486 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:13.298547 systemd-logind[1938]: New session 29 of user core. Jul 10 23:38:13.305081 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 10 23:38:14.033949 kubelet[3358]: I0710 23:38:14.033846 3358 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" path="/var/lib/kubelet/pods/b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32/volumes" Jul 10 23:38:14.036641 kubelet[3358]: I0710 23:38:14.036503 3358 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1035e0-e9c8-4fed-af95-45f2d49e722d" path="/var/lib/kubelet/pods/bf1035e0-e9c8-4fed-af95-45f2d49e722d/volumes" Jul 10 23:38:14.242458 ntpd[1930]: Deleting interface #11 lxc_health, fe80::44c6:dfff:fe7e:916a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=94 secs Jul 10 23:38:14.243091 ntpd[1930]: 10 Jul 23:38:14 ntpd[1930]: Deleting interface #11 lxc_health, fe80::44c6:dfff:fe7e:916a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=94 secs Jul 10 23:38:14.473615 sshd[5175]: Connection closed by 147.75.109.163 port 41564 Jul 10 23:38:14.480545 kubelet[3358]: I0710 23:38:14.477489 3358 memory_manager.go:355] "RemoveStaleState removing state" podUID="bf1035e0-e9c8-4fed-af95-45f2d49e722d" containerName="cilium-operator" Jul 10 23:38:14.480545 kubelet[3358]: I0710 23:38:14.477572 3358 memory_manager.go:355] "RemoveStaleState removing state" podUID="b2ad7c4f-a5b9-43ef-bc9a-85030dc02a32" containerName="cilium-agent" Jul 10 23:38:14.477957 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:14.495300 systemd[1]: sshd@28-172.31.24.228:22-147.75.109.163:41564.service: Deactivated successfully. Jul 10 23:38:14.505126 systemd[1]: session-29.scope: Deactivated successfully. Jul 10 23:38:14.510250 systemd-logind[1938]: Session 29 logged out. Waiting for processes to exit. Jul 10 23:38:14.545989 systemd[1]: Started sshd@29-172.31.24.228:22-147.75.109.163:41578.service - OpenSSH per-connection server daemon (147.75.109.163:41578). Jul 10 23:38:14.555926 systemd-logind[1938]: Removed session 29. 
Jul 10 23:38:14.562238 systemd[1]: Created slice kubepods-burstable-podfa4c3b8b_ce38_44ed_97ab_905dfaa019e4.slice - libcontainer container kubepods-burstable-podfa4c3b8b_ce38_44ed_97ab_905dfaa019e4.slice. Jul 10 23:38:14.620225 kubelet[3358]: I0710 23:38:14.619128 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-hostproc\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.620225 kubelet[3358]: I0710 23:38:14.619221 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-host-proc-sys-net\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.620225 kubelet[3358]: I0710 23:38:14.619280 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-lib-modules\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.620225 kubelet[3358]: I0710 23:38:14.619318 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-xtables-lock\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.620225 kubelet[3358]: I0710 23:38:14.619356 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-cilium-config-path\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.620225 kubelet[3358]: I0710 23:38:14.619393 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-cilium-ipsec-secrets\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.620695 kubelet[3358]: I0710 23:38:14.619436 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-hubble-tls\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.620695 kubelet[3358]: I0710 23:38:14.620311 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-host-proc-sys-kernel\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.620695 kubelet[3358]: I0710 23:38:14.620375 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-clustermesh-secrets\") pod \"cilium-nj4l2\" (UID: 
\"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.620695 kubelet[3358]: I0710 23:38:14.620443 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-cilium-run\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.620695 kubelet[3358]: I0710 23:38:14.620483 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-cilium-cgroup\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.620695 kubelet[3358]: I0710 23:38:14.620525 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-cni-path\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.621138 kubelet[3358]: I0710 23:38:14.620567 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-etc-cni-netd\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.621138 kubelet[3358]: I0710 23:38:14.620606 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq96t\" (UniqueName: \"kubernetes.io/projected/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-kube-api-access-xq96t\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.621138 kubelet[3358]: I0710 23:38:14.620650 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa4c3b8b-ce38-44ed-97ab-905dfaa019e4-bpf-maps\") pod \"cilium-nj4l2\" (UID: \"fa4c3b8b-ce38-44ed-97ab-905dfaa019e4\") " pod="kube-system/cilium-nj4l2" Jul 10 23:38:14.710725 kubelet[3358]: I0710 23:38:14.708825 3358 setters.go:602] "Node became not ready" node="ip-172-31-24-228" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T23:38:14Z","lastTransitionTime":"2025-07-10T23:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 10 23:38:14.814522 sshd[5185]: Accepted publickey for core from 147.75.109.163 port 41578 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA Jul 10 23:38:14.832265 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:14.842452 systemd-logind[1938]: New session 30 of user core. Jul 10 23:38:14.848131 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jul 10 23:38:14.891185 containerd[1952]: time="2025-07-10T23:38:14.891132081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nj4l2,Uid:fa4c3b8b-ce38-44ed-97ab-905dfaa019e4,Namespace:kube-system,Attempt:0,}" Jul 10 23:38:14.939068 containerd[1952]: time="2025-07-10T23:38:14.937873821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:38:14.939068 containerd[1952]: time="2025-07-10T23:38:14.938650545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:38:14.939068 containerd[1952]: time="2025-07-10T23:38:14.938680593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:38:14.939068 containerd[1952]: time="2025-07-10T23:38:14.938888481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:38:14.970114 systemd[1]: Started cri-containerd-9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb.scope - libcontainer container 9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb. Jul 10 23:38:14.971354 sshd[5192]: Connection closed by 147.75.109.163 port 41578 Jul 10 23:38:14.972365 sshd-session[5185]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:14.988367 systemd[1]: sshd@29-172.31.24.228:22-147.75.109.163:41578.service: Deactivated successfully. Jul 10 23:38:15.000086 systemd[1]: session-30.scope: Deactivated successfully. Jul 10 23:38:15.003184 systemd-logind[1938]: Session 30 logged out. Waiting for processes to exit. Jul 10 23:38:15.033273 systemd[1]: Started sshd@30-172.31.24.228:22-147.75.109.163:41592.service - OpenSSH per-connection server daemon (147.75.109.163:41592). Jul 10 23:38:15.036861 systemd-logind[1938]: Removed session 30. Jul 10 23:38:15.078837 containerd[1952]: time="2025-07-10T23:38:15.077525538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nj4l2,Uid:fa4c3b8b-ce38-44ed-97ab-905dfaa019e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb\"" Jul 10 23:38:15.088747 containerd[1952]: time="2025-07-10T23:38:15.088143390Z" level=info msg="CreateContainer within sandbox \"9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 23:38:15.113965 containerd[1952]: time="2025-07-10T23:38:15.113800842Z" level=info msg="CreateContainer within sandbox \"9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9a972f17c61d294acd16cb5e7ab1b19d28e1b7790fdcca878968160f128e34c6\"" Jul 10 23:38:15.114828 containerd[1952]: time="2025-07-10T23:38:15.114673326Z" level=info msg="StartContainer for \"9a972f17c61d294acd16cb5e7ab1b19d28e1b7790fdcca878968160f128e34c6\"" Jul 10 23:38:15.165135 systemd[1]: Started cri-containerd-9a972f17c61d294acd16cb5e7ab1b19d28e1b7790fdcca878968160f128e34c6.scope - libcontainer container 9a972f17c61d294acd16cb5e7ab1b19d28e1b7790fdcca878968160f128e34c6. 
Jul 10 23:38:15.221418 containerd[1952]: time="2025-07-10T23:38:15.221331343Z" level=info msg="StartContainer for \"9a972f17c61d294acd16cb5e7ab1b19d28e1b7790fdcca878968160f128e34c6\" returns successfully"
Jul 10 23:38:15.241128 systemd[1]: cri-containerd-9a972f17c61d294acd16cb5e7ab1b19d28e1b7790fdcca878968160f128e34c6.scope: Deactivated successfully.
Jul 10 23:38:15.245315 sshd[5233]: Accepted publickey for core from 147.75.109.163 port 41592 ssh2: RSA SHA256:/TRTB1Lh8fb1zu9PzlCsILTQ+p1WtcrGB8tMWhqyWCA
Jul 10 23:38:15.252117 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:15.266869 systemd-logind[1938]: New session 31 of user core.
Jul 10 23:38:15.278239 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 10 23:38:15.314786 containerd[1952]: time="2025-07-10T23:38:15.314397103Z" level=info msg="shim disconnected" id=9a972f17c61d294acd16cb5e7ab1b19d28e1b7790fdcca878968160f128e34c6 namespace=k8s.io
Jul 10 23:38:15.314786 containerd[1952]: time="2025-07-10T23:38:15.314479219Z" level=warning msg="cleaning up after shim disconnected" id=9a972f17c61d294acd16cb5e7ab1b19d28e1b7790fdcca878968160f128e34c6 namespace=k8s.io
Jul 10 23:38:15.314786 containerd[1952]: time="2025-07-10T23:38:15.314498731Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:38:15.526762 containerd[1952]: time="2025-07-10T23:38:15.526542032Z" level=info msg="CreateContainer within sandbox \"9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 10 23:38:15.556317 containerd[1952]: time="2025-07-10T23:38:15.556202060Z" level=info msg="CreateContainer within sandbox \"9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6281d45bc587e1496febec975fd56dccb5f02a510ab7108e4eaacdad2e59bfac\""
Jul 10 23:38:15.557830 containerd[1952]: time="2025-07-10T23:38:15.557762816Z" level=info msg="StartContainer for \"6281d45bc587e1496febec975fd56dccb5f02a510ab7108e4eaacdad2e59bfac\""
Jul 10 23:38:15.614074 systemd[1]: Started cri-containerd-6281d45bc587e1496febec975fd56dccb5f02a510ab7108e4eaacdad2e59bfac.scope - libcontainer container 6281d45bc587e1496febec975fd56dccb5f02a510ab7108e4eaacdad2e59bfac.
Jul 10 23:38:15.665356 containerd[1952]: time="2025-07-10T23:38:15.665286585Z" level=info msg="StartContainer for \"6281d45bc587e1496febec975fd56dccb5f02a510ab7108e4eaacdad2e59bfac\" returns successfully"
Jul 10 23:38:15.681265 systemd[1]: cri-containerd-6281d45bc587e1496febec975fd56dccb5f02a510ab7108e4eaacdad2e59bfac.scope: Deactivated successfully.
Jul 10 23:38:15.730958 containerd[1952]: time="2025-07-10T23:38:15.730836993Z" level=info msg="shim disconnected" id=6281d45bc587e1496febec975fd56dccb5f02a510ab7108e4eaacdad2e59bfac namespace=k8s.io
Jul 10 23:38:15.731495 containerd[1952]: time="2025-07-10T23:38:15.731250297Z" level=warning msg="cleaning up after shim disconnected" id=6281d45bc587e1496febec975fd56dccb5f02a510ab7108e4eaacdad2e59bfac namespace=k8s.io
Jul 10 23:38:15.731495 containerd[1952]: time="2025-07-10T23:38:15.731286081Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:38:16.531526 containerd[1952]: time="2025-07-10T23:38:16.530944485Z" level=info msg="CreateContainer within sandbox \"9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 10 23:38:16.571322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860977327.mount: Deactivated successfully.
Jul 10 23:38:16.578831 containerd[1952]: time="2025-07-10T23:38:16.578769934Z" level=info msg="CreateContainer within sandbox \"9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8507dece931329738e1f7d79433df88f12bf4de091c239a19440a9a86cf52b44\""
Jul 10 23:38:16.581952 containerd[1952]: time="2025-07-10T23:38:16.581874430Z" level=info msg="StartContainer for \"8507dece931329738e1f7d79433df88f12bf4de091c239a19440a9a86cf52b44\""
Jul 10 23:38:16.655047 systemd[1]: Started cri-containerd-8507dece931329738e1f7d79433df88f12bf4de091c239a19440a9a86cf52b44.scope - libcontainer container 8507dece931329738e1f7d79433df88f12bf4de091c239a19440a9a86cf52b44.
Jul 10 23:38:16.721826 containerd[1952]: time="2025-07-10T23:38:16.721270654Z" level=info msg="StartContainer for \"8507dece931329738e1f7d79433df88f12bf4de091c239a19440a9a86cf52b44\" returns successfully"
Jul 10 23:38:16.723323 systemd[1]: cri-containerd-8507dece931329738e1f7d79433df88f12bf4de091c239a19440a9a86cf52b44.scope: Deactivated successfully.
Jul 10 23:38:16.738488 systemd[1]: run-containerd-runc-k8s.io-8507dece931329738e1f7d79433df88f12bf4de091c239a19440a9a86cf52b44-runc.JroZU6.mount: Deactivated successfully.
Jul 10 23:38:16.791111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8507dece931329738e1f7d79433df88f12bf4de091c239a19440a9a86cf52b44-rootfs.mount: Deactivated successfully.
Jul 10 23:38:16.801312 containerd[1952]: time="2025-07-10T23:38:16.801165803Z" level=info msg="shim disconnected" id=8507dece931329738e1f7d79433df88f12bf4de091c239a19440a9a86cf52b44 namespace=k8s.io
Jul 10 23:38:16.801964 containerd[1952]: time="2025-07-10T23:38:16.801386063Z" level=warning msg="cleaning up after shim disconnected" id=8507dece931329738e1f7d79433df88f12bf4de091c239a19440a9a86cf52b44 namespace=k8s.io
Jul 10 23:38:16.801964 containerd[1952]: time="2025-07-10T23:38:16.801410867Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:38:16.830932 containerd[1952]: time="2025-07-10T23:38:16.830846507Z" level=warning msg="cleanup warnings time=\"2025-07-10T23:38:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 10 23:38:17.227690 kubelet[3358]: E0710 23:38:17.227610 3358 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 10 23:38:17.542098 containerd[1952]: time="2025-07-10T23:38:17.541470838Z" level=info msg="CreateContainer within sandbox \"9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 23:38:17.595549 containerd[1952]: time="2025-07-10T23:38:17.595463327Z" level=info msg="CreateContainer within sandbox \"9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4451363610cf8b579190302bf70b6386f77fc39906c3346013e5ce8b2ae49b72\""
Jul 10 23:38:17.597153 containerd[1952]: time="2025-07-10T23:38:17.597049895Z" level=info msg="StartContainer for \"4451363610cf8b579190302bf70b6386f77fc39906c3346013e5ce8b2ae49b72\""
Jul 10 23:38:17.662101 systemd[1]: Started cri-containerd-4451363610cf8b579190302bf70b6386f77fc39906c3346013e5ce8b2ae49b72.scope - libcontainer container 4451363610cf8b579190302bf70b6386f77fc39906c3346013e5ce8b2ae49b72.
Jul 10 23:38:17.717880 systemd[1]: cri-containerd-4451363610cf8b579190302bf70b6386f77fc39906c3346013e5ce8b2ae49b72.scope: Deactivated successfully.
Jul 10 23:38:17.722483 containerd[1952]: time="2025-07-10T23:38:17.722306507Z" level=info msg="StartContainer for \"4451363610cf8b579190302bf70b6386f77fc39906c3346013e5ce8b2ae49b72\" returns successfully"
Jul 10 23:38:17.775616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4451363610cf8b579190302bf70b6386f77fc39906c3346013e5ce8b2ae49b72-rootfs.mount: Deactivated successfully.
Jul 10 23:38:17.785141 containerd[1952]: time="2025-07-10T23:38:17.785044752Z" level=info msg="shim disconnected" id=4451363610cf8b579190302bf70b6386f77fc39906c3346013e5ce8b2ae49b72 namespace=k8s.io
Jul 10 23:38:17.785141 containerd[1952]: time="2025-07-10T23:38:17.785128332Z" level=warning msg="cleaning up after shim disconnected" id=4451363610cf8b579190302bf70b6386f77fc39906c3346013e5ce8b2ae49b72 namespace=k8s.io
Jul 10 23:38:17.785759 containerd[1952]: time="2025-07-10T23:38:17.785150148Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:38:18.551299 containerd[1952]: time="2025-07-10T23:38:18.550509611Z" level=info msg="CreateContainer within sandbox \"9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 23:38:18.594072 containerd[1952]: time="2025-07-10T23:38:18.593956800Z" level=info msg="CreateContainer within sandbox \"9c8a0f0c035776f47fffc99753b396d3b310e7da0cad0c8e8c83738e69552bfb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"80e215c8dad9329216637eeaa64f26a8652bd99697be4290f7b9b44b34a2ac43\""
Jul 10 23:38:18.594948 containerd[1952]: time="2025-07-10T23:38:18.594895716Z" level=info msg="StartContainer for \"80e215c8dad9329216637eeaa64f26a8652bd99697be4290f7b9b44b34a2ac43\""
Jul 10 23:38:18.650166 systemd[1]: Started cri-containerd-80e215c8dad9329216637eeaa64f26a8652bd99697be4290f7b9b44b34a2ac43.scope - libcontainer container 80e215c8dad9329216637eeaa64f26a8652bd99697be4290f7b9b44b34a2ac43.
Jul 10 23:38:18.723520 containerd[1952]: time="2025-07-10T23:38:18.723331020Z" level=info msg="StartContainer for \"80e215c8dad9329216637eeaa64f26a8652bd99697be4290f7b9b44b34a2ac43\" returns successfully"
Jul 10 23:38:19.816939 systemd[1]: run-containerd-runc-k8s.io-80e215c8dad9329216637eeaa64f26a8652bd99697be4290f7b9b44b34a2ac43-runc.LpQjtV.mount: Deactivated successfully.
Jul 10 23:38:19.851952 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 10 23:38:19.939690 kubelet[3358]: E0710 23:38:19.939439 3358 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35340->127.0.0.1:46017: write tcp 127.0.0.1:35340->127.0.0.1:46017: write: broken pipe
Jul 10 23:38:24.202072 systemd-networkd[1868]: lxc_health: Link UP
Jul 10 23:38:24.254838 (udev-worker)[6037]: Network interface NamePolicy= disabled on kernel command line.
Jul 10 23:38:24.280001 systemd-networkd[1868]: lxc_health: Gained carrier
Jul 10 23:38:24.624100 kubelet[3358]: E0710 23:38:24.623937 3358 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35358->127.0.0.1:46017: write tcp 127.0.0.1:35358->127.0.0.1:46017: write: connection reset by peer
Jul 10 23:38:24.927402 kubelet[3358]: I0710 23:38:24.926170 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nj4l2" podStartSLOduration=10.926145463 podStartE2EDuration="10.926145463s" podCreationTimestamp="2025-07-10 23:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:38:19.631749985 +0000 UTC m=+127.890405913" watchObservedRunningTime="2025-07-10 23:38:24.926145463 +0000 UTC m=+133.184801391"
Jul 10 23:38:25.490956 systemd-networkd[1868]: lxc_health: Gained IPv6LL
Jul 10 23:38:26.797611 systemd[1]: run-containerd-runc-k8s.io-80e215c8dad9329216637eeaa64f26a8652bd99697be4290f7b9b44b34a2ac43-runc.SacKLE.mount: Deactivated successfully.
Jul 10 23:38:28.242507 ntpd[1930]: Listen normally on 14 lxc_health [fe80::f055:94ff:fe33:b479%14]:123
Jul 10 23:38:28.243117 ntpd[1930]: 10 Jul 23:38:28 ntpd[1930]: Listen normally on 14 lxc_health [fe80::f055:94ff:fe33:b479%14]:123
Jul 10 23:38:29.150499 systemd[1]: run-containerd-runc-k8s.io-80e215c8dad9329216637eeaa64f26a8652bd99697be4290f7b9b44b34a2ac43-runc.6LYpy2.mount: Deactivated successfully.
Jul 10 23:38:31.570788 sshd[5291]: Connection closed by 147.75.109.163 port 41592
Jul 10 23:38:31.571960 sshd-session[5233]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:31.578781 systemd-logind[1938]: Session 31 logged out. Waiting for processes to exit.
Jul 10 23:38:31.579806 systemd[1]: session-31.scope: Deactivated successfully.
Jul 10 23:38:31.584552 systemd[1]: sshd@30-172.31.24.228:22-147.75.109.163:41592.service: Deactivated successfully.
Jul 10 23:38:31.598768 systemd-logind[1938]: Removed session 31.