Jul 6 23:08:30.225420 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 6 23:08:30.226099 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Sun Jul 6 21:51:54 -00 2025
Jul 6 23:08:30.226132 kernel: KASLR disabled due to lack of seed
Jul 6 23:08:30.226149 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:08:30.226165 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Jul 6 23:08:30.226180 kernel: secureboot: Secure boot disabled
Jul 6 23:08:30.226197 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:08:30.226212 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 6 23:08:30.226228 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 6 23:08:30.226244 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 6 23:08:30.226263 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 6 23:08:30.226279 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 6 23:08:30.226294 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 6 23:08:30.226310 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 6 23:08:30.226328 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 6 23:08:30.226348 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 6 23:08:30.226365 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 6 23:08:30.226382 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 6 23:08:30.226398 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 6 23:08:30.226414 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 6 23:08:30.226430 kernel: printk: bootconsole [uart0] enabled
Jul 6 23:08:30.226447 kernel: NUMA: Failed to initialise from firmware
Jul 6 23:08:30.226493 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 6 23:08:30.226516 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jul 6 23:08:30.226533 kernel: Zone ranges:
Jul 6 23:08:30.226550 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 6 23:08:30.226571 kernel: DMA32 empty
Jul 6 23:08:30.226588 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 6 23:08:30.226604 kernel: Movable zone start for each node
Jul 6 23:08:30.226620 kernel: Early memory node ranges
Jul 6 23:08:30.226636 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 6 23:08:30.226652 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 6 23:08:30.226668 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 6 23:08:30.226685 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 6 23:08:30.226701 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 6 23:08:30.226717 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 6 23:08:30.226733 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 6 23:08:30.226749 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 6 23:08:30.226769 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 6 23:08:30.226786 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 6 23:08:30.226809 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:08:30.226826 kernel: psci: PSCIv1.0 detected in firmware.
Jul 6 23:08:30.226843 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:08:30.226864 kernel: psci: Trusted OS migration not required
Jul 6 23:08:30.226881 kernel: psci: SMC Calling Convention v1.1
Jul 6 23:08:30.226898 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 6 23:08:30.226915 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 6 23:08:30.226932 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 6 23:08:30.226950 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 6 23:08:30.226967 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:08:30.226984 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:08:30.227001 kernel: CPU features: detected: Spectre-v2
Jul 6 23:08:30.227018 kernel: CPU features: detected: Spectre-v3a
Jul 6 23:08:30.227035 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:08:30.227056 kernel: CPU features: detected: ARM erratum 1742098
Jul 6 23:08:30.227073 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 6 23:08:30.227090 kernel: alternatives: applying boot alternatives
Jul 6 23:08:30.227109 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479
Jul 6 23:08:30.227128 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:08:30.227145 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:08:30.227162 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:08:30.227179 kernel: Fallback order for Node 0: 0
Jul 6 23:08:30.227196 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 6 23:08:30.227213 kernel: Policy zone: Normal
Jul 6 23:08:30.227230 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:08:30.227251 kernel: software IO TLB: area num 2.
Jul 6 23:08:30.227269 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 6 23:08:30.227286 kernel: Memory: 3821176K/4030464K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 209288K reserved, 0K cma-reserved)
Jul 6 23:08:30.227304 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:08:30.227321 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:08:30.227339 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:08:30.227356 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:08:30.227374 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:08:30.227391 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:08:30.227409 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:08:30.227426 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:08:30.227446 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:08:30.228325 kernel: GICv3: 96 SPIs implemented
Jul 6 23:08:30.228352 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:08:30.228369 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:08:30.228387 kernel: GICv3: GICv3 features: 16 PPIs
Jul 6 23:08:30.228404 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 6 23:08:30.228421 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 6 23:08:30.228438 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:08:30.228456 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jul 6 23:08:30.228578 kernel: GICv3: using LPI property table @0x00000004000d0000
Jul 6 23:08:30.228597 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 6 23:08:30.228615 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jul 6 23:08:30.228639 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:08:30.228656 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 6 23:08:30.228674 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 6 23:08:30.228691 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 6 23:08:30.228708 kernel: Console: colour dummy device 80x25
Jul 6 23:08:30.228726 kernel: printk: console [tty1] enabled
Jul 6 23:08:30.228744 kernel: ACPI: Core revision 20230628
Jul 6 23:08:30.228761 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 6 23:08:30.228779 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:08:30.228796 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:08:30.228818 kernel: landlock: Up and running.
Jul 6 23:08:30.228836 kernel: SELinux: Initializing.
Jul 6 23:08:30.228853 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:08:30.228871 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:08:30.228889 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:08:30.228906 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:08:30.228924 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:08:30.228942 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:08:30.228959 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 6 23:08:30.228981 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 6 23:08:30.228998 kernel: Remapping and enabling EFI services.
Jul 6 23:08:30.229015 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:08:30.229033 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:08:30.229050 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 6 23:08:30.229068 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jul 6 23:08:30.229086 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 6 23:08:30.229103 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:08:30.229120 kernel: SMP: Total of 2 processors activated.
Jul 6 23:08:30.229142 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:08:30.229160 kernel: CPU features: detected: 32-bit EL1 Support
Jul 6 23:08:30.229188 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:08:30.229210 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:08:30.229228 kernel: alternatives: applying system-wide alternatives
Jul 6 23:08:30.229246 kernel: devtmpfs: initialized
Jul 6 23:08:30.229264 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:08:30.229283 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:08:30.229302 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:08:30.229324 kernel: SMBIOS 3.0.0 present.
Jul 6 23:08:30.229342 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 6 23:08:30.229360 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:08:30.229379 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:08:30.229397 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:08:30.229416 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:08:30.229434 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:08:30.229457 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1
Jul 6 23:08:30.229496 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:08:30.229516 kernel: cpuidle: using governor menu
Jul 6 23:08:30.229535 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:08:30.229553 kernel: ASID allocator initialised with 65536 entries
Jul 6 23:08:30.229572 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:08:30.229590 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:08:30.229608 kernel: Modules: 17744 pages in range for non-PLT usage
Jul 6 23:08:30.229627 kernel: Modules: 509264 pages in range for PLT usage
Jul 6 23:08:30.229651 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:08:30.229670 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:08:30.229688 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:08:30.229707 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:08:30.229725 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:08:30.229743 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:08:30.229762 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:08:30.229780 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:08:30.229798 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:08:30.229821 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:08:30.229839 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:08:30.229858 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:08:30.229876 kernel: ACPI: Interpreter enabled
Jul 6 23:08:30.229894 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:08:30.229912 kernel: ACPI: MCFG table detected, 1 entries
Jul 6 23:08:30.229930 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 6 23:08:30.230221 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:08:30.230431 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 6 23:08:30.230761 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 6 23:08:30.231065 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 6 23:08:30.231278 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 6 23:08:30.231304 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 6 23:08:30.231323 kernel: acpiphp: Slot [1] registered
Jul 6 23:08:30.231342 kernel: acpiphp: Slot [2] registered
Jul 6 23:08:30.231361 kernel: acpiphp: Slot [3] registered
Jul 6 23:08:30.231380 kernel: acpiphp: Slot [4] registered
Jul 6 23:08:30.231407 kernel: acpiphp: Slot [5] registered
Jul 6 23:08:30.231426 kernel: acpiphp: Slot [6] registered
Jul 6 23:08:30.231444 kernel: acpiphp: Slot [7] registered
Jul 6 23:08:30.231480 kernel: acpiphp: Slot [8] registered
Jul 6 23:08:30.231505 kernel: acpiphp: Slot [9] registered
Jul 6 23:08:30.231524 kernel: acpiphp: Slot [10] registered
Jul 6 23:08:30.231542 kernel: acpiphp: Slot [11] registered
Jul 6 23:08:30.231561 kernel: acpiphp: Slot [12] registered
Jul 6 23:08:30.231579 kernel: acpiphp: Slot [13] registered
Jul 6 23:08:30.231606 kernel: acpiphp: Slot [14] registered
Jul 6 23:08:30.231625 kernel: acpiphp: Slot [15] registered
Jul 6 23:08:30.231643 kernel: acpiphp: Slot [16] registered
Jul 6 23:08:30.231661 kernel: acpiphp: Slot [17] registered
Jul 6 23:08:30.231680 kernel: acpiphp: Slot [18] registered
Jul 6 23:08:30.231699 kernel: acpiphp: Slot [19] registered
Jul 6 23:08:30.231718 kernel: acpiphp: Slot [20] registered
Jul 6 23:08:30.231736 kernel: acpiphp: Slot [21] registered
Jul 6 23:08:30.231755 kernel: acpiphp: Slot [22] registered
Jul 6 23:08:30.231773 kernel: acpiphp: Slot [23] registered
Jul 6 23:08:30.231795 kernel: acpiphp: Slot [24] registered
Jul 6 23:08:30.231814 kernel: acpiphp: Slot [25] registered
Jul 6 23:08:30.231832 kernel: acpiphp: Slot [26] registered
Jul 6 23:08:30.231850 kernel: acpiphp: Slot [27] registered
Jul 6 23:08:30.231868 kernel: acpiphp: Slot [28] registered
Jul 6 23:08:30.231886 kernel: acpiphp: Slot [29] registered
Jul 6 23:08:30.231904 kernel: acpiphp: Slot [30] registered
Jul 6 23:08:30.231923 kernel: acpiphp: Slot [31] registered
Jul 6 23:08:30.231941 kernel: PCI host bridge to bus 0000:00
Jul 6 23:08:30.232189 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 6 23:08:30.232380 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 6 23:08:30.232632 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 6 23:08:30.232820 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 6 23:08:30.233062 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 6 23:08:30.233389 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 6 23:08:30.233661 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 6 23:08:30.234123 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 6 23:08:30.234343 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 6 23:08:30.234612 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 6 23:08:30.234843 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 6 23:08:30.235052 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 6 23:08:30.235327 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 6 23:08:30.235589 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 6 23:08:30.235798 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 6 23:08:30.236008 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 6 23:08:30.236213 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 6 23:08:30.236419 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 6 23:08:30.237124 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 6 23:08:30.237337 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 6 23:08:30.239648 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 6 23:08:30.239856 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 6 23:08:30.240040 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 6 23:08:30.240066 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 6 23:08:30.240086 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 6 23:08:30.240105 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 6 23:08:30.240124 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 6 23:08:30.240142 kernel: iommu: Default domain type: Translated
Jul 6 23:08:30.240170 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:08:30.240189 kernel: efivars: Registered efivars operations
Jul 6 23:08:30.240208 kernel: vgaarb: loaded
Jul 6 23:08:30.240226 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:08:30.240245 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:08:30.240264 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:08:30.240282 kernel: pnp: PnP ACPI init
Jul 6 23:08:30.240552 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 6 23:08:30.240590 kernel: pnp: PnP ACPI: found 1 devices
Jul 6 23:08:30.240609 kernel: NET: Registered PF_INET protocol family
Jul 6 23:08:30.240628 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:08:30.240647 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:08:30.240666 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:08:30.240685 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:08:30.240703 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:08:30.240722 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:08:30.240741 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:08:30.240763 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:08:30.240782 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:08:30.240800 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:08:30.240819 kernel: kvm [1]: HYP mode not available
Jul 6 23:08:30.240837 kernel: Initialise system trusted keyrings
Jul 6 23:08:30.240856 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:08:30.240874 kernel: Key type asymmetric registered
Jul 6 23:08:30.240893 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:08:30.240911 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 6 23:08:30.240933 kernel: io scheduler mq-deadline registered
Jul 6 23:08:30.240951 kernel: io scheduler kyber registered
Jul 6 23:08:30.240969 kernel: io scheduler bfq registered
Jul 6 23:08:30.241196 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 6 23:08:30.241223 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 6 23:08:30.241242 kernel: ACPI: button: Power Button [PWRB]
Jul 6 23:08:30.241261 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 6 23:08:30.241279 kernel: ACPI: button: Sleep Button [SLPB]
Jul 6 23:08:30.241298 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:08:30.241323 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 6 23:08:30.243192 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 6 23:08:30.243424 kernel: printk: console [ttyS0] disabled
Jul 6 23:08:30.243685 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 6 23:08:30.243808 kernel: printk: console [ttyS0] enabled
Jul 6 23:08:30.244164 kernel: printk: bootconsole [uart0] disabled
Jul 6 23:08:30.244283 kernel: thunder_xcv, ver 1.0
Jul 6 23:08:30.246082 kernel: thunder_bgx, ver 1.0
Jul 6 23:08:30.246114 kernel: nicpf, ver 1.0
Jul 6 23:08:30.246142 kernel: nicvf, ver 1.0
Jul 6 23:08:30.246419 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:08:30.246731 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:08:29 UTC (1751843309)
Jul 6 23:08:30.246761 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:08:30.246781 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 6 23:08:30.246802 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 6 23:08:30.246822 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:08:30.246842 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:08:30.246871 kernel: Segment Routing with IPv6
Jul 6 23:08:30.246890 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:08:30.246908 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:08:30.246927 kernel: Key type dns_resolver registered
Jul 6 23:08:30.246947 kernel: registered taskstats version 1
Jul 6 23:08:30.246965 kernel: Loading compiled-in X.509 certificates
Jul 6 23:08:30.246985 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: b86e6d3bec2e587f2e5c37def91c4582416a83e3'
Jul 6 23:08:30.247004 kernel: Key type .fscrypt registered
Jul 6 23:08:30.247023 kernel: Key type fscrypt-provisioning registered
Jul 6 23:08:30.247046 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:08:30.247066 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:08:30.247084 kernel: ima: No architecture policies found
Jul 6 23:08:30.247104 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:08:30.247123 kernel: clk: Disabling unused clocks
Jul 6 23:08:30.247142 kernel: Freeing unused kernel memory: 38336K
Jul 6 23:08:30.247161 kernel: Run /init as init process
Jul 6 23:08:30.247179 kernel: with arguments:
Jul 6 23:08:30.247197 kernel: /init
Jul 6 23:08:30.247219 kernel: with environment:
Jul 6 23:08:30.247238 kernel: HOME=/
Jul 6 23:08:30.247256 kernel: TERM=linux
Jul 6 23:08:30.247274 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:08:30.247295 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:08:30.247320 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:08:30.247341 systemd[1]: Detected virtualization amazon.
Jul 6 23:08:30.247365 systemd[1]: Detected architecture arm64.
Jul 6 23:08:30.247385 systemd[1]: Running in initrd.
Jul 6 23:08:30.247404 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:08:30.247425 systemd[1]: Hostname set to .
Jul 6 23:08:30.247445 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:08:30.247495 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:08:30.249539 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:08:30.249584 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:08:30.249607 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:08:30.249638 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:08:30.249659 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:08:30.249681 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:08:30.249704 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:08:30.249726 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:08:30.249746 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:08:30.249771 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:08:30.249791 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:08:30.249812 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:08:30.249832 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:08:30.249853 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:08:30.249873 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:08:30.249893 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:08:30.249913 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:08:30.249934 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:08:30.249958 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:08:30.249978 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:08:30.249999 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:08:30.250019 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:08:30.250039 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:08:30.250059 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:08:30.250080 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:08:30.250100 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:08:30.250120 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:08:30.250144 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:08:30.250165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:08:30.250185 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:08:30.250205 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:08:30.250226 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:08:30.250308 systemd-journald[252]: Collecting audit messages is disabled.
Jul 6 23:08:30.250352 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:08:30.250373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:08:30.250400 systemd-journald[252]: Journal started
Jul 6 23:08:30.250437 systemd-journald[252]: Runtime Journal (/run/log/journal/ec238e94f071c8708493f1862ea074c6) is 8M, max 75.3M, 67.3M free.
Jul 6 23:08:30.231570 systemd-modules-load[253]: Inserted module 'overlay'
Jul 6 23:08:30.262875 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:08:30.269522 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:08:30.273507 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:08:30.272847 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:08:30.284540 kernel: Bridge firewalling registered
Jul 6 23:08:30.284434 systemd-modules-load[253]: Inserted module 'br_netfilter'
Jul 6 23:08:30.295786 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:08:30.309744 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:08:30.310895 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:08:30.323734 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:08:30.344912 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:08:30.364755 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:08:30.375078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:08:30.380936 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:08:30.390285 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:08:30.405837 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:08:30.417916 dracut-cmdline[283]: dracut-dracut-053
Jul 6 23:08:30.424705 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479
Jul 6 23:08:30.507153 systemd-resolved[292]: Positive Trust Anchors:
Jul 6 23:08:30.507783 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:08:30.507848 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:08:30.611243 kernel: SCSI subsystem initialized
Jul 6 23:08:30.618591 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:08:30.630585 kernel: iscsi: registered transport (tcp)
Jul 6 23:08:30.652949 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:08:30.653035 kernel: QLogic iSCSI HBA Driver
Jul 6 23:08:30.732491 kernel: random: crng init done
Jul 6 23:08:30.733323 systemd-resolved[292]: Defaulting to hostname 'linux'.
Jul 6 23:08:30.736173 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:08:30.742388 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:08:30.770199 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:08:30.785789 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:08:30.819882 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:08:30.819970 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:08:30.819997 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:08:30.886519 kernel: raid6: neonx8 gen() 6616 MB/s
Jul 6 23:08:30.903498 kernel: raid6: neonx4 gen() 6566 MB/s
Jul 6 23:08:30.920500 kernel: raid6: neonx2 gen() 5446 MB/s
Jul 6 23:08:30.937498 kernel: raid6: neonx1 gen() 3955 MB/s
Jul 6 23:08:30.954499 kernel: raid6: int64x8 gen() 3644 MB/s
Jul 6 23:08:30.971498 kernel: raid6: int64x4 gen() 3722 MB/s
Jul 6 23:08:30.988498 kernel: raid6: int64x2 gen() 3624 MB/s
Jul 6 23:08:31.006488 kernel: raid6: int64x1 gen() 2774 MB/s
Jul 6 23:08:31.006520 kernel: raid6: using algorithm neonx8 gen() 6616 MB/s
Jul 6 23:08:31.025501 kernel: raid6: .... xor() 4776 MB/s, rmw enabled
Jul 6 23:08:31.025544 kernel: raid6: using neon recovery algorithm
Jul 6 23:08:31.033998 kernel: xor: measuring software checksum speed
Jul 6 23:08:31.034057 kernel: 8regs : 12933 MB/sec
Jul 6 23:08:31.035178 kernel: 32regs : 13044 MB/sec
Jul 6 23:08:31.037497 kernel: arm64_neon : 9047 MB/sec
Jul 6 23:08:31.037530 kernel: xor: using function: 32regs (13044 MB/sec)
Jul 6 23:08:31.120513 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:08:31.138233 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:08:31.154756 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:08:31.201400 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jul 6 23:08:31.210982 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:08:31.226395 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:08:31.256459 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Jul 6 23:08:31.312916 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:08:31.332394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:08:31.456575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:08:31.473808 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:08:31.527158 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:08:31.533969 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:08:31.540288 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:08:31.543339 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:08:31.557172 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:08:31.589141 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:08:31.671255 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 6 23:08:31.671318 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 6 23:08:31.679306 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 6 23:08:31.679685 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 6 23:08:31.689456 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:08:31.689739 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:08:31.701114 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:08:31.715348 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:9f:43:2a:42:e7
Jul 6 23:08:31.704330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:08:31.704631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:08:31.707198 (udev-worker)[526]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:08:31.710732 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:08:31.748569 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 6 23:08:31.735349 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:08:31.739537 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:08:31.761580 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 6 23:08:31.773038 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:08:31.781886 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 6 23:08:31.784790 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:08:31.799670 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:08:31.799738 kernel: GPT:9289727 != 16777215
Jul 6 23:08:31.801011 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:08:31.801839 kernel: GPT:9289727 != 16777215
Jul 6 23:08:31.802991 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:08:31.803936 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:08:31.819283 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:08:31.895830 kernel: BTRFS: device fsid 990dd864-0c88-4d4d-9797-49057844458a devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (522)
Jul 6 23:08:31.918507 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by (udev-worker) (530)
Jul 6 23:08:31.968723 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 6 23:08:32.027904 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 6 23:08:32.068637 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 6 23:08:32.075219 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 6 23:08:32.103855 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 6 23:08:32.131204 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:08:32.143293 disk-uuid[661]: Primary Header is updated.
Jul 6 23:08:32.143293 disk-uuid[661]: Secondary Entries is updated.
Jul 6 23:08:32.143293 disk-uuid[661]: Secondary Header is updated.
Jul 6 23:08:32.158524 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:08:33.175502 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:08:33.176296 disk-uuid[662]: The operation has completed successfully.
Jul 6 23:08:33.384658 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:08:33.384885 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:08:33.463725 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:08:33.475081 sh[923]: Success
Jul 6 23:08:33.499615 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 6 23:08:33.640061 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:08:33.643660 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:08:33.654321 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:08:33.689117 kernel: BTRFS info (device dm-0): first mount of filesystem 990dd864-0c88-4d4d-9797-49057844458a
Jul 6 23:08:33.689188 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:08:33.689215 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:08:33.690999 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:08:33.693592 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:08:33.719501 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 6 23:08:33.735847 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:08:33.741286 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:08:33.758733 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:08:33.769514 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:08:33.808528 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:08:33.808895 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:08:33.810305 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 6 23:08:33.827544 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 6 23:08:33.836553 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:08:33.840089 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:08:33.856820 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:08:33.967030 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:08:33.986865 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:08:34.062550 systemd-networkd[1128]: lo: Link UP
Jul 6 23:08:34.063102 systemd-networkd[1128]: lo: Gained carrier
Jul 6 23:08:34.066599 systemd-networkd[1128]: Enumeration completed
Jul 6 23:08:34.066747 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:08:34.067668 systemd-networkd[1128]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:08:34.067675 systemd-networkd[1128]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:08:34.087760 systemd[1]: Reached target network.target - Network.
Jul 6 23:08:34.093962 systemd-networkd[1128]: eth0: Link UP
Jul 6 23:08:34.095523 systemd-networkd[1128]: eth0: Gained carrier
Jul 6 23:08:34.095547 systemd-networkd[1128]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:08:34.108094 ignition[1025]: Ignition 2.20.0
Jul 6 23:08:34.108540 ignition[1025]: Stage: fetch-offline
Jul 6 23:08:34.109559 ignition[1025]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:08:34.109583 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:08:34.110133 ignition[1025]: Ignition finished successfully
Jul 6 23:08:34.123527 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:08:34.135674 systemd-networkd[1128]: eth0: DHCPv4 address 172.31.22.108/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 6 23:08:34.141836 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:08:34.168077 ignition[1137]: Ignition 2.20.0
Jul 6 23:08:34.168097 ignition[1137]: Stage: fetch
Jul 6 23:08:34.169175 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:08:34.169201 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:08:34.169791 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:08:34.185644 ignition[1137]: PUT result: OK
Jul 6 23:08:34.188826 ignition[1137]: parsed url from cmdline: ""
Jul 6 23:08:34.188843 ignition[1137]: no config URL provided
Jul 6 23:08:34.188860 ignition[1137]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:08:34.188886 ignition[1137]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:08:34.188917 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:08:34.195967 ignition[1137]: PUT result: OK
Jul 6 23:08:34.198623 ignition[1137]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 6 23:08:34.200702 ignition[1137]: GET result: OK
Jul 6 23:08:34.201001 ignition[1137]: parsing config with SHA512: 88d29c7afcbd3c94d8e38769ea407d06342fd4fe9a2ffce9c36961d00945c2f0a962f88dc5155c2ab81b34ba45c89bc7d09c1b709fbd586f3779a31b8b9b27dc
Jul 6 23:08:34.211122 unknown[1137]: fetched base config from "system"
Jul 6 23:08:34.211855 unknown[1137]: fetched base config from "system"
Jul 6 23:08:34.211908 unknown[1137]: fetched user config from "aws"
Jul 6 23:08:34.214539 ignition[1137]: fetch: fetch complete
Jul 6 23:08:34.214550 ignition[1137]: fetch: fetch passed
Jul 6 23:08:34.219912 ignition[1137]: Ignition finished successfully
Jul 6 23:08:34.228533 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:08:34.237763 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:08:34.268066 ignition[1144]: Ignition 2.20.0
Jul 6 23:08:34.268587 ignition[1144]: Stage: kargs
Jul 6 23:08:34.269204 ignition[1144]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:08:34.269229 ignition[1144]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:08:34.269375 ignition[1144]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:08:34.279666 ignition[1144]: PUT result: OK
Jul 6 23:08:34.284310 ignition[1144]: kargs: kargs passed
Jul 6 23:08:34.284444 ignition[1144]: Ignition finished successfully
Jul 6 23:08:34.291079 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:08:34.305866 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:08:34.331411 ignition[1150]: Ignition 2.20.0
Jul 6 23:08:34.331920 ignition[1150]: Stage: disks
Jul 6 23:08:34.332594 ignition[1150]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:08:34.332619 ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:08:34.332809 ignition[1150]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:08:34.342991 ignition[1150]: PUT result: OK
Jul 6 23:08:34.347635 ignition[1150]: disks: disks passed
Jul 6 23:08:34.347732 ignition[1150]: Ignition finished successfully
Jul 6 23:08:34.352493 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:08:34.353585 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:08:34.363065 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:08:34.365981 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:08:34.368812 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:08:34.371517 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:08:34.387185 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:08:34.448137 systemd-fsck[1159]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:08:34.454826 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:08:34.467008 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:08:34.542508 kernel: EXT4-fs (nvme0n1p9): mounted filesystem efd38a90-a3d5-48a9-85e4-1ea6162daba0 r/w with ordered data mode. Quota mode: none.
Jul 6 23:08:34.543567 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:08:34.544374 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:08:34.562649 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:08:34.569341 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:08:34.580579 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:08:34.603022 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1178)
Jul 6 23:08:34.603061 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:08:34.603087 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:08:34.603113 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 6 23:08:34.580695 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:08:34.580750 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:08:34.612605 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:08:34.618991 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:08:34.632525 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 6 23:08:34.635589 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:08:34.738400 initrd-setup-root[1203]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:08:34.749748 initrd-setup-root[1210]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:08:34.758510 initrd-setup-root[1217]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:08:34.766957 initrd-setup-root[1224]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:08:34.913968 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:08:34.926113 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:08:34.935493 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:08:34.949850 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:08:34.953927 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:08:34.992058 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:08:35.002924 ignition[1292]: INFO : Ignition 2.20.0
Jul 6 23:08:35.002924 ignition[1292]: INFO : Stage: mount
Jul 6 23:08:35.002924 ignition[1292]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:08:35.002924 ignition[1292]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:08:35.002924 ignition[1292]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:08:35.002924 ignition[1292]: INFO : PUT result: OK
Jul 6 23:08:35.028550 ignition[1292]: INFO : mount: mount passed
Jul 6 23:08:35.028550 ignition[1292]: INFO : Ignition finished successfully
Jul 6 23:08:35.008426 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:08:35.028218 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:08:35.051918 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:08:35.083351 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1303)
Jul 6 23:08:35.083414 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:08:35.083440 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:08:35.086283 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 6 23:08:35.091510 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 6 23:08:35.094813 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:08:35.127999 ignition[1320]: INFO : Ignition 2.20.0
Jul 6 23:08:35.130155 ignition[1320]: INFO : Stage: files
Jul 6 23:08:35.130155 ignition[1320]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:08:35.130155 ignition[1320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:08:35.130155 ignition[1320]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:08:35.139586 systemd-networkd[1128]: eth0: Gained IPv6LL
Jul 6 23:08:35.148190 ignition[1320]: INFO : PUT result: OK
Jul 6 23:08:35.153460 ignition[1320]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:08:35.156970 ignition[1320]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:08:35.156970 ignition[1320]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:08:35.168759 ignition[1320]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:08:35.174715 ignition[1320]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:08:35.178733 unknown[1320]: wrote ssh authorized keys file for user: core
Jul 6 23:08:35.181337 ignition[1320]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:08:35.185305 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 6 23:08:35.185305 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 6 23:08:35.286853 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:08:35.474695 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 6 23:08:35.474695 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:08:35.484671 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 6 23:08:35.811268 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:08:35.923864 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:08:35.927898 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 6 23:08:36.493715 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:08:36.844194 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:08:36.844194 ignition[1320]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 6 23:08:36.852359 ignition[1320]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:08:36.852359 ignition[1320]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:08:36.852359 ignition[1320]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 6 23:08:36.852359 ignition[1320]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:08:36.852359 ignition[1320]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:08:36.852359 ignition[1320]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:08:36.852359 ignition[1320]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:08:36.852359 ignition[1320]: INFO : files: files passed
Jul 6 23:08:36.852359 ignition[1320]: INFO : Ignition finished successfully
Jul 6 23:08:36.886729 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:08:36.897877 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:08:36.907977 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:08:36.915902 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:08:36.916365 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:08:36.949244 initrd-setup-root-after-ignition[1348]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:08:36.949244 initrd-setup-root-after-ignition[1348]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:08:36.959653 initrd-setup-root-after-ignition[1352]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:08:36.966559 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:08:36.973129 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:08:36.991780 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:08:37.045828 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:08:37.046025 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:08:37.049332 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:08:37.053528 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:08:37.056084 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:08:37.057807 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:08:37.103999 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:08:37.115828 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:08:37.142112 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:08:37.150064 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:08:37.154832 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:08:37.162393 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:08:37.162704 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:08:37.166319 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:08:37.169246 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:08:37.171859 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:08:37.174939 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:08:37.178076 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:08:37.181210 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:08:37.186218 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:08:37.189615 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:08:37.217449 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:08:37.220356 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:08:37.224533 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:08:37.224763 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:08:37.232383 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:08:37.240003 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:08:37.243081 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:08:37.248546 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:08:37.252043 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:08:37.252326 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:08:37.262457 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:08:37.262899 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:08:37.271880 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:08:37.272088 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:08:37.284813 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:08:37.292794 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:08:37.296927 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:08:37.297251 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:08:37.304248 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:08:37.304507 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:08:37.333401 ignition[1372]: INFO : Ignition 2.20.0
Jul 6 23:08:37.333401 ignition[1372]: INFO : Stage: umount
Jul 6 23:08:37.340342 ignition[1372]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:08:37.340342 ignition[1372]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:08:37.340342 ignition[1372]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:08:37.340342 ignition[1372]: INFO : PUT result: OK
Jul 6 23:08:37.357885 ignition[1372]: INFO : umount: umount passed
Jul 6 23:08:37.357885 ignition[1372]: INFO : Ignition finished successfully
Jul 6 23:08:37.366113 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:08:37.371580 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:08:37.378144 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:08:37.379116 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:08:37.379340 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:08:37.390978 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:08:37.391158 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:08:37.395663 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:08:37.395769 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:08:37.410690 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:08:37.411649 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:08:37.418153 systemd[1]: Stopped target network.target - Network.
Jul 6 23:08:37.420441 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:08:37.420565 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:08:37.423659 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:08:37.425978 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:08:37.430627 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:08:37.440917 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:08:37.445997 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:08:37.451923 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:08:37.452007 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:08:37.454620 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:08:37.454688 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:08:37.457351 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:08:37.457445 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:08:37.460093 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:08:37.460172 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:08:37.463103 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:08:37.465928 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:08:37.480979 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:08:37.481172 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:08:37.488783 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 6 23:08:37.489322 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:08:37.489554 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:08:37.527648 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 6 23:08:37.528124 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:08:37.528299 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:08:37.533918 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:08:37.534037 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:08:37.548417 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:08:37.548549 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:08:37.564400 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:08:37.566799 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:08:37.566914 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:08:37.570163 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:08:37.570247 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:08:37.578785 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:08:37.578997 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:08:37.594904 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:08:37.594999 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:08:37.604101 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:08:37.616135 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:08:37.616868 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:08:37.627092 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:08:37.627415 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:08:37.632563 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:08:37.632694 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:08:37.645164 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:08:37.645245 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:08:37.648045 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:08:37.648140 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:08:37.651185 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:08:37.651273 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:08:37.668753 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:08:37.668849 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:08:37.673674 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:08:37.684884 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:08:37.685181 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:08:37.694660 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 6 23:08:37.694763 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:08:37.697866 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:08:37.697955 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:08:37.712929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:08:37.713031 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:08:37.723441 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 6 23:08:37.723809 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:08:37.724713 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:08:37.724995 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:08:37.743557 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:08:37.743988 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:08:37.749956 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:08:37.765744 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:08:37.781607 systemd[1]: Switching root.
Jul 6 23:08:37.820530 systemd-journald[252]: Journal stopped
Jul 6 23:08:39.901837 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:08:39.902400 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:08:39.904573 kernel: SELinux: policy capability open_perms=1
Jul 6 23:08:39.904615 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:08:39.904646 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:08:39.904681 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:08:39.904711 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:08:39.904739 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:08:39.904769 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:08:39.904801 kernel: audit: type=1403 audit(1751843318.087:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:08:39.904850 systemd[1]: Successfully loaded SELinux policy in 49.709ms.
Jul 6 23:08:39.904896 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.206ms.
Jul 6 23:08:39.904929 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:08:39.904960 systemd[1]: Detected virtualization amazon.
Jul 6 23:08:39.904993 systemd[1]: Detected architecture arm64.
Jul 6 23:08:39.905024 systemd[1]: Detected first boot.
Jul 6 23:08:39.905053 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:08:39.905084 zram_generator::config[1417]: No configuration found.
Jul 6 23:08:39.905117 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:08:39.905147 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:08:39.905179 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 6 23:08:39.905209 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:08:39.905242 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:08:39.905273 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:08:39.905303 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:08:39.905336 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:08:39.905366 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:08:39.905397 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:08:39.905428 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:08:39.905457 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:08:39.907575 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:08:39.907617 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:08:39.907648 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:08:39.907679 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:08:39.907712 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:08:39.907742 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:08:39.907774 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:08:39.907804 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:08:39.907835 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 6 23:08:39.907868 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:08:39.907898 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:08:39.907932 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:08:39.907962 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:08:39.907994 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:08:39.908023 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:08:39.908056 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:08:39.908088 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:08:39.908124 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:08:39.908154 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:08:39.908185 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:08:39.908223 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 6 23:08:39.908253 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:08:39.908288 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:08:39.908321 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:08:39.908352 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:08:39.908382 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:08:39.908419 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:08:39.908449 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:08:39.908527 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:08:39.908564 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:08:39.908597 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:08:39.908628 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:08:39.908657 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:08:39.908701 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:08:39.908732 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:08:39.908770 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:08:39.908802 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:08:39.908833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:08:39.908865 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:08:39.908899 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:08:39.908929 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:08:39.908960 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:08:39.908990 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:08:39.909025 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:08:39.909056 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:08:39.909086 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:08:39.909117 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:08:39.909146 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:08:39.909175 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:08:39.909203 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:08:39.909231 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:08:39.909261 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:08:39.909294 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 6 23:08:39.909323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:08:39.909353 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:08:39.909384 kernel: fuse: init (API version 7.39)
Jul 6 23:08:39.909423 systemd[1]: Stopped verity-setup.service.
Jul 6 23:08:39.909454 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:08:39.911565 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:08:39.911610 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:08:39.911643 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:08:39.911672 kernel: loop: module loaded
Jul 6 23:08:39.911701 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:08:39.911732 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:08:39.911770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:08:39.911802 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:08:39.911833 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:08:39.911862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:08:39.911892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:08:39.911921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:08:39.911951 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:08:39.911985 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:08:39.912014 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:08:39.912044 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:08:39.912072 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:08:39.912101 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:08:39.912132 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:08:39.912165 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:08:39.912193 kernel: ACPI: bus type drm_connector registered
Jul 6 23:08:39.912275 systemd-journald[1507]: Collecting audit messages is disabled.
Jul 6 23:08:39.912330 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:08:39.912361 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:08:39.912392 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:08:39.912424 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 6 23:08:39.912454 systemd-journald[1507]: Journal started
Jul 6 23:08:39.912553 systemd-journald[1507]: Runtime Journal (/run/log/journal/ec238e94f071c8708493f1862ea074c6) is 8M, max 75.3M, 67.3M free.
Jul 6 23:08:39.919002 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:08:39.196269 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:08:39.211162 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 6 23:08:39.212016 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:08:39.933007 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:08:39.936379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:08:39.949534 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:08:39.956499 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:08:39.969787 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:08:39.979602 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:08:39.979691 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:08:39.996001 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:08:40.020364 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:08:40.020496 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:08:40.027602 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:08:40.032446 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:08:40.033625 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:08:40.038583 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:08:40.042849 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 6 23:08:40.046913 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:08:40.050604 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:08:40.054239 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:08:40.080643 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:08:40.132265 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:08:40.135714 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:08:40.149973 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:08:40.167377 kernel: loop0: detected capacity change from 0 to 113512
Jul 6 23:08:40.167566 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 6 23:08:40.207849 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:08:40.219566 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:08:40.231215 systemd-journald[1507]: Time spent on flushing to /var/log/journal/ec238e94f071c8708493f1862ea074c6 is 65.298ms for 929 entries.
Jul 6 23:08:40.231215 systemd-journald[1507]: System Journal (/var/log/journal/ec238e94f071c8708493f1862ea074c6) is 8M, max 195.6M, 187.6M free.
Jul 6 23:08:40.323083 systemd-journald[1507]: Received client request to flush runtime journal.
Jul 6 23:08:40.324284 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:08:40.324377 kernel: loop1: detected capacity change from 0 to 207008
Jul 6 23:08:40.236892 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:08:40.312777 systemd-tmpfiles[1534]: ACLs are not supported, ignoring.
Jul 6 23:08:40.312805 systemd-tmpfiles[1534]: ACLs are not supported, ignoring.
Jul 6 23:08:40.331394 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:08:40.340432 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:08:40.344895 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 6 23:08:40.360805 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:08:40.371743 udevadm[1564]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 6 23:08:40.383023 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:08:40.415547 kernel: loop2: detected capacity change from 0 to 123192
Jul 6 23:08:40.472878 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:08:40.488608 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:08:40.503216 kernel: loop3: detected capacity change from 0 to 53784
Jul 6 23:08:40.555394 systemd-tmpfiles[1577]: ACLs are not supported, ignoring.
Jul 6 23:08:40.555426 systemd-tmpfiles[1577]: ACLs are not supported, ignoring.
Jul 6 23:08:40.573581 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:08:40.630911 kernel: loop4: detected capacity change from 0 to 113512
Jul 6 23:08:40.659513 kernel: loop5: detected capacity change from 0 to 207008
Jul 6 23:08:40.702508 kernel: loop6: detected capacity change from 0 to 123192
Jul 6 23:08:40.730628 kernel: loop7: detected capacity change from 0 to 53784
Jul 6 23:08:40.766211 (sd-merge)[1583]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 6 23:08:40.769911 (sd-merge)[1583]: Merged extensions into '/usr'.
Jul 6 23:08:40.780552 systemd[1]: Reload requested from client PID 1533 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:08:40.780582 systemd[1]: Reloading...
Jul 6 23:08:40.969503 zram_generator::config[1611]: No configuration found.
Jul 6 23:08:41.001505 ldconfig[1529]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:08:41.249025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:08:41.409630 systemd[1]: Reloading finished in 627 ms.
Jul 6 23:08:41.432512 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:08:41.435965 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:08:41.439923 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:08:41.455744 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:08:41.466808 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:08:41.474795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:08:41.506495 systemd[1]: Reload requested from client PID 1664 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:08:41.506535 systemd[1]: Reloading...
Jul 6 23:08:41.569510 systemd-tmpfiles[1665]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:08:41.571097 systemd-tmpfiles[1665]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:08:41.574272 systemd-tmpfiles[1665]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:08:41.577172 systemd-tmpfiles[1665]: ACLs are not supported, ignoring.
Jul 6 23:08:41.579722 systemd-tmpfiles[1665]: ACLs are not supported, ignoring.
Jul 6 23:08:41.584386 systemd-udevd[1666]: Using default interface naming scheme 'v255'.
Jul 6 23:08:41.592331 systemd-tmpfiles[1665]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:08:41.592363 systemd-tmpfiles[1665]: Skipping /boot
Jul 6 23:08:41.634841 systemd-tmpfiles[1665]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:08:41.634872 systemd-tmpfiles[1665]: Skipping /boot
Jul 6 23:08:41.728512 zram_generator::config[1695]: No configuration found.
Jul 6 23:08:42.047327 (udev-worker)[1710]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:08:42.135494 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1728)
Jul 6 23:08:42.137368 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:08:42.357372 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 6 23:08:42.359204 systemd[1]: Reloading finished in 852 ms.
Jul 6 23:08:42.380638 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:08:42.386740 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:08:42.457526 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 6 23:08:42.468693 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:08:42.526836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 6 23:08:42.545735 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:08:42.552794 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:08:42.561052 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:08:42.570374 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 6 23:08:42.579949 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:08:42.592305 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:08:42.601718 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:08:42.625811 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:08:42.629721 lvm[1866]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:08:42.630191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:08:42.635526 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:08:42.640027 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:08:42.643798 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:08:42.653950 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:08:42.670721 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:08:42.675618 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:08:42.686685 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:08:42.692518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:08:42.699449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:08:42.703556 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:08:42.709315 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:08:42.710915 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:08:42.717648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:08:42.718078 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:08:42.722803 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:08:42.724909 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:08:42.747919 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:08:42.748026 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:08:42.761069 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:08:42.811350 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:08:42.819760 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 6 23:08:42.825446 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:08:42.843231 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:08:42.861968 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 6 23:08:42.876007 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:08:42.886663 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:08:42.906006 lvm[1905]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:08:42.908746 augenrules[1909]: No rules
Jul 6 23:08:42.916195 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:08:42.916745 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:08:42.920593 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:08:42.924724 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:08:42.952436 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:08:42.964180 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 6 23:08:42.993599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:08:42.998452 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:08:43.125432 systemd-networkd[1879]: lo: Link UP
Jul 6 23:08:43.126024 systemd-networkd[1879]: lo: Gained carrier
Jul 6 23:08:43.128517 systemd-resolved[1880]: Positive Trust Anchors:
Jul 6 23:08:43.128551 systemd-resolved[1880]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:08:43.128615 systemd-resolved[1880]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:08:43.129905 systemd-networkd[1879]: Enumeration completed
Jul 6 23:08:43.130087 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:08:43.137515 systemd-networkd[1879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:08:43.137540 systemd-networkd[1879]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:08:43.143257 systemd-resolved[1880]: Defaulting to hostname 'linux'.
Jul 6 23:08:43.143939 systemd-networkd[1879]: eth0: Link UP
Jul 6 23:08:43.144866 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 6 23:08:43.149133 systemd-networkd[1879]: eth0: Gained carrier
Jul 6 23:08:43.149183 systemd-networkd[1879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:08:43.168867 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:08:43.171870 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:08:43.175358 systemd[1]: Reached target network.target - Network.
Jul 6 23:08:43.177651 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:08:43.180774 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:08:43.183644 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:08:43.186894 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:08:43.190391 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:08:43.193320 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:08:43.196638 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:08:43.199995 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:08:43.200040 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:08:43.205657 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:08:43.208228 systemd-networkd[1879]: eth0: DHCPv4 address 172.31.22.108/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 6 23:08:43.210258 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:08:43.217115 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:08:43.226880 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 6 23:08:43.230984 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 6 23:08:43.235597 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 6 23:08:43.246850 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:08:43.249969 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 6 23:08:43.254898 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 6 23:08:43.258897 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:08:43.262856 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:08:43.265744 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:08:43.268151 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:08:43.268206 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:08:43.273697 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:08:43.282801 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 6 23:08:43.290139 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:08:43.296773 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:08:43.316781 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:08:43.322522 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:08:43.326193 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:08:43.336804 systemd[1]: Started ntpd.service - Network Time Service.
Jul 6 23:08:43.348715 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:08:43.358190 jq[1937]: false
Jul 6 23:08:43.365823 systemd[1]: Starting setup-oem.service - Setup OEM...
Jul 6 23:08:43.374394 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:08:43.383503 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:08:43.406177 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:08:43.411110 extend-filesystems[1938]: Found loop4
Jul 6 23:08:43.416091 extend-filesystems[1938]: Found loop5
Jul 6 23:08:43.416091 extend-filesystems[1938]: Found loop6
Jul 6 23:08:43.416091 extend-filesystems[1938]: Found loop7
Jul 6 23:08:43.416091 extend-filesystems[1938]: Found nvme0n1
Jul 6 23:08:43.416091 extend-filesystems[1938]: Found nvme0n1p1
Jul 6 23:08:43.416091 extend-filesystems[1938]: Found nvme0n1p2
Jul 6 23:08:43.416091 extend-filesystems[1938]: Found nvme0n1p3
Jul 6 23:08:43.416091 extend-filesystems[1938]: Found usr
Jul 6 23:08:43.416091 extend-filesystems[1938]: Found nvme0n1p4
Jul 6 23:08:43.416091 extend-filesystems[1938]: Found nvme0n1p6
Jul 6 23:08:43.416091 extend-filesystems[1938]: Found nvme0n1p7
Jul 6 23:08:43.416091 extend-filesystems[1938]: Found nvme0n1p9
Jul 6 23:08:43.416091 extend-filesystems[1938]: Checking size of /dev/nvme0n1p9
Jul 6 23:08:43.480701 extend-filesystems[1938]: Resized partition /dev/nvme0n1p9
Jul 6 23:08:43.425316 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:08:43.438206 dbus-daemon[1936]: [system] SELinux support is enabled
Jul 6 23:08:43.485458 extend-filesystems[1955]: resize2fs 1.47.1 (20-May-2024)
Jul 6 23:08:43.427542 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:08:43.470662 dbus-daemon[1936]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1879 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 6 23:08:43.452664 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:08:43.467870 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:08:43.495104 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:08:43.516690 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 6 23:08:43.516274 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:08:43.518951 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:08:43.535313 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:08:43.536654 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 6 23:08:43.535381 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:08:43.539606 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:08:43.539647 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:08:43.553827 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 6 23:08:43.571623 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:08:43.574599 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:08:43.578835 jq[1956]: true
Jul 6 23:08:43.620629 tar[1959]: linux-arm64/LICENSE
Jul 6 23:08:43.633405 tar[1959]: linux-arm64/helm
Jul 6 23:08:43.633907 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:08:43.636320 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:08:43.646150 jq[1970]: true
Jul 6 23:08:43.668508 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 6 23:08:43.702544 (ntainerd)[1983]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:08:43.711349 extend-filesystems[1955]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 6 23:08:43.711349 extend-filesystems[1955]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 6 23:08:43.711349 extend-filesystems[1955]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 6 23:08:43.733500 extend-filesystems[1938]: Resized filesystem in /dev/nvme0n1p9
Jul 6 23:08:43.726624 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:08:43.732881 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:08:43.755860 ntpd[1942]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:45 UTC 2025 (1): Starting
Jul 6 23:08:43.758547 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:45 UTC 2025 (1): Starting
Jul 6 23:08:43.758547 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 6 23:08:43.758547 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: ----------------------------------------------------
Jul 6 23:08:43.758547 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: ntp-4 is maintained by Network Time Foundation,
Jul 6 23:08:43.758547 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 6 23:08:43.758547 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: corporation. Support and training for ntp-4 are
Jul 6 23:08:43.758547 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: available at https://www.nwtime.org/support
Jul 6 23:08:43.758547 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: ----------------------------------------------------
Jul 6 23:08:43.755918 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 6 23:08:43.755938 ntpd[1942]: ----------------------------------------------------
Jul 6 23:08:43.755956 ntpd[1942]: ntp-4 is maintained by Network Time Foundation,
Jul 6 23:08:43.755974 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 6 23:08:43.755992 ntpd[1942]: corporation. Support and training for ntp-4 are
Jul 6 23:08:43.756011 ntpd[1942]: available at https://www.nwtime.org/support
Jul 6 23:08:43.756028 ntpd[1942]: ----------------------------------------------------
Jul 6 23:08:43.761811 ntpd[1942]: proto: precision = 0.096 usec (-23)
Jul 6 23:08:43.763508 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: proto: precision = 0.096 usec (-23)
Jul 6 23:08:43.763508 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: basedate set to 2025-06-24
Jul 6 23:08:43.763508 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: gps base set to 2025-06-29 (week 2373)
Jul 6 23:08:43.763028 ntpd[1942]: basedate set to 2025-06-24
Jul 6 23:08:43.763054 ntpd[1942]: gps base set to 2025-06-29 (week 2373)
Jul 6 23:08:43.765869 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123
Jul 6 23:08:43.766022 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123
Jul 6 23:08:43.766130 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 6 23:08:43.766231 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 6 23:08:43.766612 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123
Jul 6 23:08:43.767505 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123
Jul 6 23:08:43.767505 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: Listen normally on 3 eth0 172.31.22.108:123
Jul 6 23:08:43.767505 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: Listen normally on 4 lo [::1]:123
Jul 6 23:08:43.767505 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: bind(21) AF_INET6 fe80::49f:43ff:fe2a:42e7%2#123 flags 0x11 failed: Cannot assign requested address
Jul 6 23:08:43.767505 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: unable to create socket on eth0 (5) for fe80::49f:43ff:fe2a:42e7%2#123
Jul 6 23:08:43.767505 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: failed to init interface for address fe80::49f:43ff:fe2a:42e7%2
Jul 6 23:08:43.767505 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: Listening on routing socket on fd #21 for interface updates
Jul 6 23:08:43.766779 ntpd[1942]: Listen normally on 3 eth0 172.31.22.108:123
Jul 6 23:08:43.766845 ntpd[1942]: Listen normally on 4 lo [::1]:123
Jul 6 23:08:43.766919 ntpd[1942]: bind(21) AF_INET6 fe80::49f:43ff:fe2a:42e7%2#123 flags 0x11 failed: Cannot assign requested address
Jul 6 23:08:43.766957 ntpd[1942]: unable to create socket on eth0 (5) for fe80::49f:43ff:fe2a:42e7%2#123
Jul 6 23:08:43.766984 ntpd[1942]: failed to init interface for address fe80::49f:43ff:fe2a:42e7%2
Jul 6 23:08:43.767037 ntpd[1942]: Listening on routing socket on fd #21 for interface updates
Jul 6 23:08:43.772765 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 6 23:08:43.772967 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 6 23:08:43.773062 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 6 23:08:43.773210 ntpd[1942]: 6 Jul 23:08:43 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 6 23:08:43.774365 coreos-metadata[1935]: Jul 06 23:08:43.773 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 6 23:08:43.776072 coreos-metadata[1935]: Jul 06 23:08:43.776 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jul 6 23:08:43.778918 coreos-metadata[1935]: Jul 06 23:08:43.778 INFO Fetch successful
Jul 6 23:08:43.778918 coreos-metadata[1935]: Jul 06 23:08:43.778 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jul 6 23:08:43.779650 coreos-metadata[1935]: Jul 06 23:08:43.779 INFO Fetch successful
Jul 6 23:08:43.779959 coreos-metadata[1935]: Jul 06 23:08:43.779 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jul 6 23:08:43.780637 coreos-metadata[1935]: Jul 06 23:08:43.780 INFO Fetch successful
Jul 6 23:08:43.780854 coreos-metadata[1935]: Jul 06 23:08:43.780 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jul 6 23:08:43.782727 coreos-metadata[1935]: Jul 06 23:08:43.782 INFO Fetch successful
Jul 6 23:08:43.783080 coreos-metadata[1935]: Jul 06 23:08:43.782 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jul 6 23:08:43.783990 coreos-metadata[1935]: Jul 06 23:08:43.783 INFO Fetch failed with 404: resource not found
Jul 6 23:08:43.784325 coreos-metadata[1935]: Jul 06 23:08:43.784 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jul 6 23:08:43.784935 coreos-metadata[1935]: Jul 06 23:08:43.784 INFO Fetch successful
Jul 6 23:08:43.785261 coreos-metadata[1935]: Jul 06 23:08:43.785 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jul 6 23:08:43.785898 coreos-metadata[1935]: Jul 06 23:08:43.785 INFO Fetch successful
Jul 6 23:08:43.786210 coreos-metadata[1935]: Jul 06 23:08:43.786 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jul 6 23:08:43.786979 coreos-metadata[1935]: Jul 06 23:08:43.786 INFO Fetch successful
Jul 6 23:08:43.787307 coreos-metadata[1935]: Jul 06 23:08:43.787 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jul 6 23:08:43.787942 coreos-metadata[1935]: Jul 06 23:08:43.787 INFO Fetch successful
Jul 6 23:08:43.788316 coreos-metadata[1935]: Jul 06 23:08:43.788 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jul 6 23:08:43.789344 coreos-metadata[1935]: Jul 06 23:08:43.788 INFO Fetch successful
Jul 6 23:08:43.821193 systemd[1]: Finished setup-oem.service - Setup OEM.
Jul 6 23:08:43.855699 update_engine[1951]: I20250706 23:08:43.855422 1951 main.cc:92] Flatcar Update Engine starting
Jul 6 23:08:43.882115 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:08:43.894158 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:08:43.904541 update_engine[1951]: I20250706 23:08:43.903697 1951 update_check_scheduler.cc:74] Next update check in 10m56s
Jul 6 23:08:43.942581 bash[2017]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:08:43.964895 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1728)
Jul 6 23:08:43.968736 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:08:44.002843 systemd-logind[1948]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 6 23:08:44.003349 systemd-logind[1948]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jul 6 23:08:44.004171 systemd[1]: Starting sshkeys.service...
Jul 6 23:08:44.005620 systemd-logind[1948]: New seat seat0.
Jul 6 23:08:44.014512 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 6 23:08:44.020881 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:08:44.039555 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:08:44.061210 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 6 23:08:44.075081 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 6 23:08:44.304494 coreos-metadata[2034]: Jul 06 23:08:44.302 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 6 23:08:44.304494 coreos-metadata[2034]: Jul 06 23:08:44.303 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jul 6 23:08:44.312495 coreos-metadata[2034]: Jul 06 23:08:44.309 INFO Fetch successful
Jul 6 23:08:44.312495 coreos-metadata[2034]: Jul 06 23:08:44.309 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 6 23:08:44.314954 coreos-metadata[2034]: Jul 06 23:08:44.314 INFO Fetch successful
Jul 6 23:08:44.324922 unknown[2034]: wrote ssh authorized keys file for user: core
Jul 6 23:08:44.489991 update-ssh-keys[2083]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:08:44.491573 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 6 23:08:44.509574 systemd[1]: Finished sshkeys.service.
Jul 6 23:08:44.565623 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 6 23:08:44.570994 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 6 23:08:44.574121 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1968 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 6 23:08:44.611709 systemd-networkd[1879]: eth0: Gained IPv6LL
Jul 6 23:08:44.622200 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 6 23:08:44.631801 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:08:44.640388 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:08:44.667177 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jul 6 23:08:44.676851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:08:44.688508 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:08:44.708803 locksmithd[2015]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:08:44.722726 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 6 23:08:44.734991 polkitd[2121]: Started polkitd version 121
Jul 6 23:08:44.807961 polkitd[2121]: Loading rules from directory /etc/polkit-1/rules.d
Jul 6 23:08:44.808070 polkitd[2121]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 6 23:08:44.817358 polkitd[2121]: Finished loading, compiling and executing 2 rules
Jul 6 23:08:44.831281 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 6 23:08:44.834218 polkitd[2121]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 6 23:08:44.836422 systemd[1]: Started polkit.service - Authorization Manager.
Jul 6 23:08:44.877496 containerd[1983]: time="2025-07-06T23:08:44.874935420Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 6 23:08:44.892497 amazon-ssm-agent[2124]: Initializing new seelog logger
Jul 6 23:08:44.892497 amazon-ssm-agent[2124]: New Seelog Logger Creation Complete
Jul 6 23:08:44.892497 amazon-ssm-agent[2124]: 2025/07/06 23:08:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:08:44.892497 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:08:44.893130 amazon-ssm-agent[2124]: 2025/07/06 23:08:44 processing appconfig overrides
Jul 6 23:08:44.898721 amazon-ssm-agent[2124]: 2025/07/06 23:08:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:08:44.898721 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:08:44.898721 amazon-ssm-agent[2124]: 2025/07/06 23:08:44 processing appconfig overrides
Jul 6 23:08:44.898721 amazon-ssm-agent[2124]: 2025/07/06 23:08:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:08:44.898721 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:08:44.898721 amazon-ssm-agent[2124]: 2025/07/06 23:08:44 processing appconfig overrides
Jul 6 23:08:44.899149 amazon-ssm-agent[2124]: 2025-07-06 23:08:44 INFO Proxy environment variables:
Jul 6 23:08:44.905169 amazon-ssm-agent[2124]: 2025/07/06 23:08:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:08:44.905169 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:08:44.905320 amazon-ssm-agent[2124]: 2025/07/06 23:08:44 processing appconfig overrides
Jul 6 23:08:44.910546 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:08:44.937747 systemd-hostnamed[1968]: Hostname set to (transient)
Jul 6 23:08:44.940140 systemd-resolved[1880]: System hostname changed to 'ip-172-31-22-108'.
Jul 6 23:08:45.001093 amazon-ssm-agent[2124]: 2025-07-06 23:08:44 INFO no_proxy:
Jul 6 23:08:45.070529 containerd[1983]: time="2025-07-06T23:08:45.070429809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:08:45.085574 containerd[1983]: time="2025-07-06T23:08:45.085247601Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:08:45.085574 containerd[1983]: time="2025-07-06T23:08:45.085318749Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 6 23:08:45.085574 containerd[1983]: time="2025-07-06T23:08:45.085380813Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 6 23:08:45.088970 containerd[1983]: time="2025-07-06T23:08:45.086018769Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 6 23:08:45.088970 containerd[1983]: time="2025-07-06T23:08:45.086062437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 6 23:08:45.088970 containerd[1983]: time="2025-07-06T23:08:45.086216289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:08:45.088970 containerd[1983]: time="2025-07-06T23:08:45.086244945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:08:45.089482 containerd[1983]: time="2025-07-06T23:08:45.089414421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:08:45.093087 containerd[1983]: time="2025-07-06T23:08:45.091530753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 6 23:08:45.093087 containerd[1983]: time="2025-07-06T23:08:45.091622253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:08:45.093087 containerd[1983]: time="2025-07-06T23:08:45.091654233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 6 23:08:45.093087 containerd[1983]: time="2025-07-06T23:08:45.091867221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:08:45.093087 containerd[1983]: time="2025-07-06T23:08:45.092267877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:08:45.093087 containerd[1983]: time="2025-07-06T23:08:45.092575629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:08:45.093087 containerd[1983]: time="2025-07-06T23:08:45.092610477Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 6 23:08:45.093087 containerd[1983]: time="2025-07-06T23:08:45.092784981Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 6 23:08:45.093087 containerd[1983]: time="2025-07-06T23:08:45.092879853Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:08:45.101076 amazon-ssm-agent[2124]: 2025-07-06 23:08:44 INFO https_proxy:
Jul 6 23:08:45.108495 containerd[1983]: time="2025-07-06T23:08:45.107565153Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 6 23:08:45.108495 containerd[1983]: time="2025-07-06T23:08:45.107729097Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 6 23:08:45.110394 containerd[1983]: time="2025-07-06T23:08:45.110317209Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 6 23:08:45.110563 containerd[1983]: time="2025-07-06T23:08:45.110419113Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 6 23:08:45.110563 containerd[1983]: time="2025-07-06T23:08:45.110459133Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 6 23:08:45.111870 containerd[1983]: time="2025-07-06T23:08:45.111349125Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 6 23:08:45.114058 containerd[1983]: time="2025-07-06T23:08:45.113974641Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 6 23:08:45.114742 containerd[1983]: time="2025-07-06T23:08:45.114351429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 6 23:08:45.114742 containerd[1983]: time="2025-07-06T23:08:45.114521481Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 6 23:08:45.114742 containerd[1983]: time="2025-07-06T23:08:45.114559365Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 6 23:08:45.114742 containerd[1983]: time="2025-07-06T23:08:45.114611073Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 6 23:08:45.114742 containerd[1983]: time="2025-07-06T23:08:45.114648237Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 6 23:08:45.114742 containerd[1983]: time="2025-07-06T23:08:45.114680289Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 6 23:08:45.114742 containerd[1983]: time="2025-07-06T23:08:45.114740817Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 6 23:08:45.115215 containerd[1983]: time="2025-07-06T23:08:45.114780357Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116514573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116574273Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116604741Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116646657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116678325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116714469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116745717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116775021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116807301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116845281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116876121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116905833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116939961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.118512 containerd[1983]: time="2025-07-06T23:08:45.116969469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.119153 containerd[1983]: time="2025-07-06T23:08:45.116997933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.119153 containerd[1983]: time="2025-07-06T23:08:45.117029025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.119153 containerd[1983]: time="2025-07-06T23:08:45.117062301Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 6 23:08:45.119153 containerd[1983]: time="2025-07-06T23:08:45.117113973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.119153 containerd[1983]: time="2025-07-06T23:08:45.117153021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.119153 containerd[1983]: time="2025-07-06T23:08:45.117180297Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 6 23:08:45.122163 containerd[1983]: time="2025-07-06T23:08:45.120416973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 6 23:08:45.122163 containerd[1983]: time="2025-07-06T23:08:45.120514149Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 6 23:08:45.122163 containerd[1983]: time="2025-07-06T23:08:45.120542361Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 6 23:08:45.122163 containerd[1983]: time="2025-07-06T23:08:45.120591585Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 6 23:08:45.122163 containerd[1983]: time="2025-07-06T23:08:45.120616377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.122163 containerd[1983]: time="2025-07-06T23:08:45.120661737Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 6 23:08:45.122163 containerd[1983]: time="2025-07-06T23:08:45.120684993Z" level=info msg="NRI interface is disabled by configuration."
Jul 6 23:08:45.122163 containerd[1983]: time="2025-07-06T23:08:45.120709389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 6 23:08:45.122650 containerd[1983]: time="2025-07-06T23:08:45.121330125Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:08:45.122650 containerd[1983]: time="2025-07-06T23:08:45.121415457Z" level=info msg="Connect containerd service" Jul 6 23:08:45.127058 containerd[1983]: time="2025-07-06T23:08:45.126519405Z" level=info msg="using legacy CRI server" Jul 6 23:08:45.127058 containerd[1983]: time="2025-07-06T23:08:45.126570069Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:08:45.127209 containerd[1983]: time="2025-07-06T23:08:45.127157661Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:08:45.135576 containerd[1983]: time="2025-07-06T23:08:45.134575221Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:08:45.138128 containerd[1983]: time="2025-07-06T23:08:45.138072501Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:08:45.138331 containerd[1983]: time="2025-07-06T23:08:45.138303789Z" level=info msg=serving... 
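Editor's note: the `cni config load failed` error in the entry above is benign at this stage of boot. The CRI plugin scans `NetworkPluginConfDir` (`/etc/cni/net.d`, per the config dump above) and finds nothing until a network add-on installs a conf file; the "Start cni network conf syncer" loop started just below picks one up when it appears, with no containerd restart needed. For illustration only, a minimal bridge-type conflist of the kind an add-on would drop there (the file name, network name, and subnet here are hypothetical, not taken from this host):

```json
{
  "cniVersion": "1.0.0",
  "name": "example-pod-network",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Note that `NetworkPluginMaxConfNum:1` in the dumped config means only the lexicographically first conflist in the directory is used.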
address=/run/containerd/containerd.sock Jul 6 23:08:45.140127 containerd[1983]: time="2025-07-06T23:08:45.140060697Z" level=info msg="Start subscribing containerd event" Jul 6 23:08:45.142549 containerd[1983]: time="2025-07-06T23:08:45.142342665Z" level=info msg="Start recovering state" Jul 6 23:08:45.145669 containerd[1983]: time="2025-07-06T23:08:45.145620729Z" level=info msg="Start event monitor" Jul 6 23:08:45.145828 containerd[1983]: time="2025-07-06T23:08:45.145802109Z" level=info msg="Start snapshots syncer" Jul 6 23:08:45.147041 containerd[1983]: time="2025-07-06T23:08:45.146514693Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:08:45.147041 containerd[1983]: time="2025-07-06T23:08:45.146542785Z" level=info msg="Start streaming server" Jul 6 23:08:45.147041 containerd[1983]: time="2025-07-06T23:08:45.146825745Z" level=info msg="containerd successfully booted in 0.279559s" Jul 6 23:08:45.146954 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 6 23:08:45.203448 amazon-ssm-agent[2124]: 2025-07-06 23:08:44 INFO http_proxy: Jul 6 23:08:45.301937 amazon-ssm-agent[2124]: 2025-07-06 23:08:44 INFO Checking if agent identity type OnPrem can be assumed Jul 6 23:08:45.405138 amazon-ssm-agent[2124]: 2025-07-06 23:08:44 INFO Checking if agent identity type EC2 can be assumed Jul 6 23:08:45.501594 amazon-ssm-agent[2124]: 2025-07-06 23:08:45 INFO Agent will take identity from EC2 Jul 6 23:08:45.601259 amazon-ssm-agent[2124]: 2025-07-06 23:08:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 6 23:08:45.702212 amazon-ssm-agent[2124]: 2025-07-06 23:08:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 6 23:08:45.799990 amazon-ssm-agent[2124]: 2025-07-06 23:08:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 6 23:08:45.900262 amazon-ssm-agent[2124]: 2025-07-06 23:08:45 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 6 23:08:46.000613 amazon-ssm-agent[2124]: 2025-07-06 23:08:45 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 6 23:08:46.001232 tar[1959]: linux-arm64/README.md Jul 6 23:08:46.032367 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:08:46.100450 amazon-ssm-agent[2124]: 2025-07-06 23:08:45 INFO [amazon-ssm-agent] Starting Core Agent Jul 6 23:08:46.103600 sshd_keygen[1985]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:08:46.151938 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:08:46.156289 amazon-ssm-agent[2124]: 2025-07-06 23:08:45 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 6 23:08:46.156289 amazon-ssm-agent[2124]: 2025-07-06 23:08:45 INFO [Registrar] Starting registrar module Jul 6 23:08:46.156443 amazon-ssm-agent[2124]: 2025-07-06 23:08:45 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 6 23:08:46.156443 amazon-ssm-agent[2124]: 2025-07-06 23:08:46 INFO [EC2Identity] EC2 registration was successful. 
Jul 6 23:08:46.156443 amazon-ssm-agent[2124]: 2025-07-06 23:08:46 INFO [CredentialRefresher] credentialRefresher has started Jul 6 23:08:46.156443 amazon-ssm-agent[2124]: 2025-07-06 23:08:46 INFO [CredentialRefresher] Starting credentials refresher loop Jul 6 23:08:46.156443 amazon-ssm-agent[2124]: 2025-07-06 23:08:46 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 6 23:08:46.166274 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:08:46.174248 systemd[1]: Started sshd@0-172.31.22.108:22-147.75.109.163:45002.service - OpenSSH per-connection server daemon (147.75.109.163:45002). Jul 6 23:08:46.194687 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:08:46.195358 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:08:46.201719 amazon-ssm-agent[2124]: 2025-07-06 23:08:46 INFO [CredentialRefresher] Next credential rotation will be in 31.091658515466666 minutes Jul 6 23:08:46.211972 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:08:46.249058 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:08:46.264046 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:08:46.283334 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:08:46.287445 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:08:46.416869 sshd[2174]: Accepted publickey for core from 147.75.109.163 port 45002 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74 Jul 6 23:08:46.421250 sshd-session[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:08:46.436060 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:08:46.448108 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:08:46.477010 systemd-logind[1948]: New session 1 of user core. 
Jul 6 23:08:46.489124 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:08:46.506924 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:08:46.520967 (systemd)[2185]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:08:46.527283 systemd-logind[1948]: New session c1 of user core. Jul 6 23:08:46.749891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:08:46.757066 ntpd[1942]: Listen normally on 6 eth0 [fe80::49f:43ff:fe2a:42e7%2]:123 Jul 6 23:08:46.757786 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:08:46.763823 ntpd[1942]: 6 Jul 23:08:46 ntpd[1942]: Listen normally on 6 eth0 [fe80::49f:43ff:fe2a:42e7%2]:123 Jul 6 23:08:46.773106 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:08:46.846558 systemd[2185]: Queued start job for default target default.target. Jul 6 23:08:46.852644 systemd[2185]: Created slice app.slice - User Application Slice. Jul 6 23:08:46.852711 systemd[2185]: Reached target paths.target - Paths. Jul 6 23:08:46.852801 systemd[2185]: Reached target timers.target - Timers. Jul 6 23:08:46.857716 systemd[2185]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:08:46.892842 systemd[2185]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:08:46.893305 systemd[2185]: Reached target sockets.target - Sockets. Jul 6 23:08:46.893622 systemd[2185]: Reached target basic.target - Basic System. Jul 6 23:08:46.893838 systemd[2185]: Reached target default.target - Main User Target. Jul 6 23:08:46.893877 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:08:46.894966 systemd[2185]: Startup finished in 353ms. Jul 6 23:08:46.906054 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 6 23:08:46.909895 systemd[1]: Startup finished in 1.099s (kernel) + 8.294s (initrd) + 8.872s (userspace) = 18.265s. Jul 6 23:08:47.079668 systemd[1]: Started sshd@1-172.31.22.108:22-147.75.109.163:35552.service - OpenSSH per-connection server daemon (147.75.109.163:35552). Jul 6 23:08:47.192494 amazon-ssm-agent[2124]: 2025-07-06 23:08:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 6 23:08:47.268537 sshd[2210]: Accepted publickey for core from 147.75.109.163 port 35552 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74 Jul 6 23:08:47.272230 sshd-session[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:08:47.288756 systemd-logind[1948]: New session 2 of user core. Jul 6 23:08:47.293654 amazon-ssm-agent[2124]: 2025-07-06 23:08:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2213) started Jul 6 23:08:47.295726 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:08:47.394849 amazon-ssm-agent[2124]: 2025-07-06 23:08:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 6 23:08:47.434058 sshd[2218]: Connection closed by 147.75.109.163 port 35552 Jul 6 23:08:47.435630 sshd-session[2210]: pam_unix(sshd:session): session closed for user core Jul 6 23:08:47.446712 systemd[1]: sshd@1-172.31.22.108:22-147.75.109.163:35552.service: Deactivated successfully. Jul 6 23:08:47.451008 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:08:47.452980 systemd-logind[1948]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:08:47.472577 systemd-logind[1948]: Removed session 2. Jul 6 23:08:47.479992 systemd[1]: Started sshd@2-172.31.22.108:22-147.75.109.163:35568.service - OpenSSH per-connection server daemon (147.75.109.163:35568). 
Jul 6 23:08:47.730309 sshd[2227]: Accepted publickey for core from 147.75.109.163 port 35568 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74 Jul 6 23:08:47.733330 sshd-session[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:08:47.747930 systemd-logind[1948]: New session 3 of user core. Jul 6 23:08:47.754030 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:08:47.827992 kubelet[2196]: E0706 23:08:47.827927 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:08:47.832510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:08:47.832838 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:08:47.833427 systemd[1]: kubelet.service: Consumed 1.425s CPU time, 259.7M memory peak. Jul 6 23:08:47.874528 sshd[2231]: Connection closed by 147.75.109.163 port 35568 Jul 6 23:08:47.875817 sshd-session[2227]: pam_unix(sshd:session): session closed for user core Jul 6 23:08:47.881690 systemd[1]: sshd@2-172.31.22.108:22-147.75.109.163:35568.service: Deactivated successfully. Jul 6 23:08:47.884349 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:08:47.888582 systemd-logind[1948]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:08:47.890200 systemd-logind[1948]: Removed session 3. Jul 6 23:08:47.916997 systemd[1]: Started sshd@3-172.31.22.108:22-147.75.109.163:35572.service - OpenSSH per-connection server daemon (147.75.109.163:35572). 
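Editor's note: the kubelet exit above (`/var/lib/kubelet/config.yaml: no such file or directory`) is the expected pre-join state on a kubeadm-managed node; that file is written by `kubeadm init` or `kubeadm join`, after which the unit's scheduled restarts (one is visible later at 23:08:57) succeed. For reference, a minimal `KubeletConfiguration` of the kind that file contains (all values illustrative, not recovered from this host):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd cgroup driver, matching SystemdCgroup:true in the
# containerd runc options shown in the config dump above
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
clusterDNS:
  - 10.96.0.10          # illustrative cluster DNS service IP
clusterDomain: cluster.local
```

The `KUBELET_EXTRA_ARGS` / `KUBELET_KUBEADM_ARGS` variables flagged as unset in the unit's log entries are populated by the same join flow, via the kubeadm drop-in.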
Jul 6 23:08:48.098233 sshd[2240]: Accepted publickey for core from 147.75.109.163 port 35572 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74 Jul 6 23:08:48.100987 sshd-session[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:08:48.109638 systemd-logind[1948]: New session 4 of user core. Jul 6 23:08:48.116748 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:08:48.241548 sshd[2242]: Connection closed by 147.75.109.163 port 35572 Jul 6 23:08:48.241400 sshd-session[2240]: pam_unix(sshd:session): session closed for user core Jul 6 23:08:48.247540 systemd[1]: sshd@3-172.31.22.108:22-147.75.109.163:35572.service: Deactivated successfully. Jul 6 23:08:48.251228 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:08:48.254429 systemd-logind[1948]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:08:48.256355 systemd-logind[1948]: Removed session 4. Jul 6 23:08:48.289951 systemd[1]: Started sshd@4-172.31.22.108:22-147.75.109.163:35578.service - OpenSSH per-connection server daemon (147.75.109.163:35578). Jul 6 23:08:48.463901 sshd[2248]: Accepted publickey for core from 147.75.109.163 port 35578 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74 Jul 6 23:08:48.466284 sshd-session[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:08:48.474148 systemd-logind[1948]: New session 5 of user core. Jul 6 23:08:48.486720 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 6 23:08:48.608662 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:08:48.609293 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:08:48.628063 sudo[2251]: pam_unix(sudo:session): session closed for user root Jul 6 23:08:48.650870 sshd[2250]: Connection closed by 147.75.109.163 port 35578 Jul 6 23:08:48.652085 sshd-session[2248]: pam_unix(sshd:session): session closed for user core Jul 6 23:08:48.656913 systemd[1]: sshd@4-172.31.22.108:22-147.75.109.163:35578.service: Deactivated successfully. Jul 6 23:08:48.660212 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:08:48.662881 systemd-logind[1948]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:08:48.664885 systemd-logind[1948]: Removed session 5. Jul 6 23:08:48.695938 systemd[1]: Started sshd@5-172.31.22.108:22-147.75.109.163:35582.service - OpenSSH per-connection server daemon (147.75.109.163:35582). Jul 6 23:08:48.879300 sshd[2257]: Accepted publickey for core from 147.75.109.163 port 35582 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74 Jul 6 23:08:48.881707 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:08:48.891850 systemd-logind[1948]: New session 6 of user core. Jul 6 23:08:48.901707 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 6 23:08:49.005369 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:08:49.006054 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:08:49.012336 sudo[2261]: pam_unix(sudo:session): session closed for user root Jul 6 23:08:49.022401 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:08:49.023040 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:08:49.046373 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:08:49.099533 augenrules[2283]: No rules Jul 6 23:08:49.102800 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:08:49.103244 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:08:49.106032 sudo[2260]: pam_unix(sudo:session): session closed for user root Jul 6 23:08:49.129733 sshd[2259]: Connection closed by 147.75.109.163 port 35582 Jul 6 23:08:49.130634 sshd-session[2257]: pam_unix(sshd:session): session closed for user core Jul 6 23:08:49.137054 systemd[1]: sshd@5-172.31.22.108:22-147.75.109.163:35582.service: Deactivated successfully. Jul 6 23:08:49.141275 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:08:49.144152 systemd-logind[1948]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:08:49.146137 systemd-logind[1948]: Removed session 6. Jul 6 23:08:49.171000 systemd[1]: Started sshd@6-172.31.22.108:22-147.75.109.163:35596.service - OpenSSH per-connection server daemon (147.75.109.163:35596). 
Jul 6 23:08:49.345224 sshd[2292]: Accepted publickey for core from 147.75.109.163 port 35596 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74 Jul 6 23:08:49.347614 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:08:49.356774 systemd-logind[1948]: New session 7 of user core. Jul 6 23:08:49.365772 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:08:49.468022 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:08:49.468663 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:08:50.031104 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:08:50.031150 (dockerd)[2314]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:08:50.434093 dockerd[2314]: time="2025-07-06T23:08:50.433450251Z" level=info msg="Starting up" Jul 6 23:08:50.568366 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2200411550-merged.mount: Deactivated successfully. Jul 6 23:08:50.652882 systemd[1]: var-lib-docker-metacopy\x2dcheck3776665721-merged.mount: Deactivated successfully. Jul 6 23:08:50.667293 dockerd[2314]: time="2025-07-06T23:08:50.667230713Z" level=info msg="Loading containers: start." Jul 6 23:08:50.907515 kernel: Initializing XFRM netlink socket Jul 6 23:08:50.942443 (udev-worker)[2338]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:08:51.030574 systemd-networkd[1879]: docker0: Link UP Jul 6 23:08:51.064911 dockerd[2314]: time="2025-07-06T23:08:51.064756033Z" level=info msg="Loading containers: done." 
Jul 6 23:08:51.089331 dockerd[2314]: time="2025-07-06T23:08:51.089252567Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:08:51.089584 dockerd[2314]: time="2025-07-06T23:08:51.089398438Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 6 23:08:51.089793 dockerd[2314]: time="2025-07-06T23:08:51.089741478Z" level=info msg="Daemon has completed initialization" Jul 6 23:08:51.145817 dockerd[2314]: time="2025-07-06T23:08:51.145684277Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:08:51.146061 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:08:52.237484 containerd[1983]: time="2025-07-06T23:08:52.237069291Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 6 23:08:52.850604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount856211089.mount: Deactivated successfully. 
Jul 6 23:08:54.258533 containerd[1983]: time="2025-07-06T23:08:54.258427340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:54.260655 containerd[1983]: time="2025-07-06T23:08:54.260574494Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jul 6 23:08:54.263316 containerd[1983]: time="2025-07-06T23:08:54.263239921Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:54.269107 containerd[1983]: time="2025-07-06T23:08:54.269010558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:54.271436 containerd[1983]: time="2025-07-06T23:08:54.271182119Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.034050352s" Jul 6 23:08:54.271436 containerd[1983]: time="2025-07-06T23:08:54.271243793Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 6 23:08:54.272460 containerd[1983]: time="2025-07-06T23:08:54.272391199Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 6 23:08:55.608020 containerd[1983]: time="2025-07-06T23:08:55.607952572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:55.611671 containerd[1983]: time="2025-07-06T23:08:55.611595450Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228" Jul 6 23:08:55.611888 containerd[1983]: time="2025-07-06T23:08:55.611694628Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:55.621647 containerd[1983]: time="2025-07-06T23:08:55.621572527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:55.624065 containerd[1983]: time="2025-07-06T23:08:55.623820730Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.351364332s" Jul 6 23:08:55.624065 containerd[1983]: time="2025-07-06T23:08:55.623874775Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 6 23:08:55.624993 containerd[1983]: time="2025-07-06T23:08:55.624694277Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 6 23:08:56.758636 containerd[1983]: time="2025-07-06T23:08:56.758560336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:56.760603 containerd[1983]: time="2025-07-06T23:08:56.760534512Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jul 6 23:08:56.761510 containerd[1983]: time="2025-07-06T23:08:56.761110128Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:56.766460 containerd[1983]: time="2025-07-06T23:08:56.766383482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:56.769000 containerd[1983]: time="2025-07-06T23:08:56.768808597Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.14406181s" Jul 6 23:08:56.769000 containerd[1983]: time="2025-07-06T23:08:56.768865089Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 6 23:08:56.770624 containerd[1983]: time="2025-07-06T23:08:56.770411763Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 6 23:08:57.939942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:08:57.952760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:08:58.029599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2484404988.mount: Deactivated successfully. Jul 6 23:08:58.341716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:08:58.356585 (kubelet)[2581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:08:58.450628 kubelet[2581]: E0706 23:08:58.450498 2581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:08:58.459130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:08:58.459908 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:08:58.462352 systemd[1]: kubelet.service: Consumed 323ms CPU time, 109.7M memory peak. Jul 6 23:08:58.783970 containerd[1983]: time="2025-07-06T23:08:58.783888783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:58.786675 containerd[1983]: time="2025-07-06T23:08:58.786582024Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jul 6 23:08:58.789133 containerd[1983]: time="2025-07-06T23:08:58.789060823Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:58.793670 containerd[1983]: time="2025-07-06T23:08:58.793572247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:58.795145 containerd[1983]: time="2025-07-06T23:08:58.794942333Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id 
\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 2.024468681s" Jul 6 23:08:58.795145 containerd[1983]: time="2025-07-06T23:08:58.794998669Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 6 23:08:58.796099 containerd[1983]: time="2025-07-06T23:08:58.795825440Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:08:59.363227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount812153254.mount: Deactivated successfully. Jul 6 23:09:00.628596 containerd[1983]: time="2025-07-06T23:09:00.628516789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:00.630771 containerd[1983]: time="2025-07-06T23:09:00.630683637Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 6 23:09:00.632933 containerd[1983]: time="2025-07-06T23:09:00.632845555Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:00.639141 containerd[1983]: time="2025-07-06T23:09:00.639047941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:00.641732 containerd[1983]: time="2025-07-06T23:09:00.641523862Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.845644234s" Jul 6 23:09:00.641732 containerd[1983]: time="2025-07-06T23:09:00.641578531Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 6 23:09:00.642640 containerd[1983]: time="2025-07-06T23:09:00.642584348Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:09:01.135149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467891714.mount: Deactivated successfully. Jul 6 23:09:01.149164 containerd[1983]: time="2025-07-06T23:09:01.149087263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:01.150995 containerd[1983]: time="2025-07-06T23:09:01.150908444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 6 23:09:01.153432 containerd[1983]: time="2025-07-06T23:09:01.153361373Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:01.158613 containerd[1983]: time="2025-07-06T23:09:01.158505120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:01.160246 containerd[1983]: time="2025-07-06T23:09:01.160039104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 517.239273ms" Jul 6 23:09:01.160246 containerd[1983]: time="2025-07-06T23:09:01.160092310Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:09:01.160909 containerd[1983]: time="2025-07-06T23:09:01.160868861Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 6 23:09:01.734089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003111853.mount: Deactivated successfully. Jul 6 23:09:04.155194 containerd[1983]: time="2025-07-06T23:09:04.155101909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:04.157611 containerd[1983]: time="2025-07-06T23:09:04.157521842Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 6 23:09:04.159821 containerd[1983]: time="2025-07-06T23:09:04.159708624Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:04.166518 containerd[1983]: time="2025-07-06T23:09:04.166378859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:04.169515 containerd[1983]: time="2025-07-06T23:09:04.168975764Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.007935893s" Jul 6 23:09:04.169515 
containerd[1983]: time="2025-07-06T23:09:04.169039548Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 6 23:09:08.691683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:09:08.702948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:09:09.093209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:09:09.107743 (kubelet)[2726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:09:09.208772 kubelet[2726]: E0706 23:09:09.208561 2726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:09:09.213812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:09:09.214295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:09:09.216626 systemd[1]: kubelet.service: Consumed 331ms CPU time, 104.9M memory peak. Jul 6 23:09:10.677556 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:09:10.677884 systemd[1]: kubelet.service: Consumed 331ms CPU time, 104.9M memory peak. Jul 6 23:09:10.689942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:09:10.747702 systemd[1]: Reload requested from client PID 2740 ('systemctl') (unit session-7.scope)... Jul 6 23:09:10.747731 systemd[1]: Reloading... Jul 6 23:09:11.003816 zram_generator::config[2792]: No configuration found. 
Jul 6 23:09:11.302790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:09:11.544764 systemd[1]: Reloading finished in 796 ms. Jul 6 23:09:11.639987 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:09:11.640200 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:09:11.640867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:09:11.640951 systemd[1]: kubelet.service: Consumed 241ms CPU time, 94.7M memory peak. Jul 6 23:09:11.648207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:09:12.073803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:09:12.085414 (kubelet)[2849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:09:12.166745 kubelet[2849]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:09:12.168521 kubelet[2849]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:09:12.168521 kubelet[2849]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:09:12.168521 kubelet[2849]: I0706 23:09:12.167411 2849 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:09:13.421113 kubelet[2849]: I0706 23:09:13.421063 2849 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:09:13.421710 kubelet[2849]: I0706 23:09:13.421533 2849 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:09:13.424529 kubelet[2849]: I0706 23:09:13.422829 2849 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:09:13.481958 kubelet[2849]: E0706 23:09:13.481886 2849 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.108:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:09:13.488512 kubelet[2849]: I0706 23:09:13.488427 2849 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:09:13.499369 kubelet[2849]: E0706 23:09:13.499301 2849 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:09:13.500149 kubelet[2849]: I0706 23:09:13.499573 2849 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:09:13.505655 kubelet[2849]: I0706 23:09:13.505601 2849 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:09:13.507349 kubelet[2849]: I0706 23:09:13.507261 2849 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:09:13.507689 kubelet[2849]: I0706 23:09:13.507337 2849 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-108","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:09:13.507875 kubelet[2849]: I0706 23:09:13.507832 2849 topology_manager.go:138] "Creating topology manager with none 
policy" Jul 6 23:09:13.507875 kubelet[2849]: I0706 23:09:13.507856 2849 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:09:13.508261 kubelet[2849]: I0706 23:09:13.508208 2849 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:09:13.515241 kubelet[2849]: I0706 23:09:13.515040 2849 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:09:13.515241 kubelet[2849]: I0706 23:09:13.515095 2849 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:09:13.515241 kubelet[2849]: I0706 23:09:13.515130 2849 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:09:13.515241 kubelet[2849]: I0706 23:09:13.515151 2849 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:09:13.525559 kubelet[2849]: W0706 23:09:13.523782 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-108&limit=500&resourceVersion=0": dial tcp 172.31.22.108:6443: connect: connection refused Jul 6 23:09:13.525559 kubelet[2849]: E0706 23:09:13.523913 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-108&limit=500&resourceVersion=0\": dial tcp 172.31.22.108:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:09:13.525559 kubelet[2849]: I0706 23:09:13.524044 2849 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:09:13.525559 kubelet[2849]: I0706 23:09:13.525120 2849 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:09:13.525559 kubelet[2849]: W0706 23:09:13.525356 2849 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:09:13.529575 kubelet[2849]: I0706 23:09:13.529533 2849 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:09:13.529825 kubelet[2849]: I0706 23:09:13.529800 2849 server.go:1287] "Started kubelet" Jul 6 23:09:13.540809 kubelet[2849]: W0706 23:09:13.540699 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.108:6443: connect: connection refused Jul 6 23:09:13.540981 kubelet[2849]: E0706 23:09:13.540827 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.108:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:09:13.541458 kubelet[2849]: E0706 23:09:13.540950 2849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.108:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.108:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-108.184fcc3f95453ace default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-108,UID:ip-172-31-22-108,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-108,},FirstTimestamp:2025-07-06 23:09:13.529760462 +0000 UTC m=+1.435685163,LastTimestamp:2025-07-06 23:09:13.529760462 +0000 UTC m=+1.435685163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-108,}" Jul 6 23:09:13.543449 kubelet[2849]: I0706 
23:09:13.543401 2849 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:09:13.548270 kubelet[2849]: I0706 23:09:13.548169 2849 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:09:13.552063 kubelet[2849]: I0706 23:09:13.551995 2849 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:09:13.557648 kubelet[2849]: I0706 23:09:13.557590 2849 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:09:13.558197 kubelet[2849]: E0706 23:09:13.558135 2849 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-108\" not found" Jul 6 23:09:13.560919 kubelet[2849]: I0706 23:09:13.560860 2849 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:09:13.561073 kubelet[2849]: I0706 23:09:13.560979 2849 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:09:13.561391 kubelet[2849]: E0706 23:09:13.561359 2849 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:09:13.561713 kubelet[2849]: I0706 23:09:13.561637 2849 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:09:13.562140 kubelet[2849]: I0706 23:09:13.562111 2849 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:09:13.562539 kubelet[2849]: I0706 23:09:13.562507 2849 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:09:13.563950 kubelet[2849]: E0706 23:09:13.563888 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-108?timeout=10s\": dial tcp 172.31.22.108:6443: connect: connection refused" interval="200ms" Jul 6 23:09:13.566020 kubelet[2849]: I0706 23:09:13.565961 2849 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:09:13.566621 kubelet[2849]: I0706 23:09:13.566331 2849 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:09:13.568657 kubelet[2849]: W0706 23:09:13.568383 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.108:6443: connect: connection refused Jul 6 23:09:13.568971 kubelet[2849]: E0706 23:09:13.568565 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": 
dial tcp 172.31.22.108:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:09:13.569693 kubelet[2849]: I0706 23:09:13.569623 2849 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:09:13.589534 kubelet[2849]: I0706 23:09:13.588997 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:09:13.591307 kubelet[2849]: I0706 23:09:13.591249 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:09:13.591307 kubelet[2849]: I0706 23:09:13.591306 2849 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:09:13.592778 kubelet[2849]: I0706 23:09:13.591339 2849 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:09:13.592778 kubelet[2849]: I0706 23:09:13.591358 2849 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:09:13.592778 kubelet[2849]: E0706 23:09:13.591433 2849 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:09:13.600009 kubelet[2849]: W0706 23:09:13.599930 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.108:6443: connect: connection refused Jul 6 23:09:13.600356 kubelet[2849]: E0706 23:09:13.600317 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.108:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:09:13.617169 kubelet[2849]: I0706 23:09:13.617135 2849 cpu_manager.go:221] "Starting CPU manager" policy="none" 
Jul 6 23:09:13.617607 kubelet[2849]: I0706 23:09:13.617438 2849 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:09:13.617607 kubelet[2849]: I0706 23:09:13.617538 2849 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:09:13.621272 kubelet[2849]: I0706 23:09:13.620819 2849 policy_none.go:49] "None policy: Start" Jul 6 23:09:13.621272 kubelet[2849]: I0706 23:09:13.620863 2849 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:09:13.621272 kubelet[2849]: I0706 23:09:13.620887 2849 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:09:13.632162 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:09:13.651841 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:09:13.658904 kubelet[2849]: E0706 23:09:13.658820 2849 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-108\" not found" Jul 6 23:09:13.660632 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:09:13.672826 kubelet[2849]: I0706 23:09:13.671182 2849 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:09:13.672826 kubelet[2849]: I0706 23:09:13.671518 2849 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:09:13.672826 kubelet[2849]: I0706 23:09:13.671542 2849 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:09:13.672826 kubelet[2849]: I0706 23:09:13.672238 2849 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:09:13.677651 kubelet[2849]: E0706 23:09:13.677436 2849 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:09:13.677651 kubelet[2849]: E0706 23:09:13.677605 2849 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-108\" not found" Jul 6 23:09:13.717042 systemd[1]: Created slice kubepods-burstable-pod8372def66c7cacee6f1602cbe7b99df5.slice - libcontainer container kubepods-burstable-pod8372def66c7cacee6f1602cbe7b99df5.slice. Jul 6 23:09:13.730005 kubelet[2849]: E0706 23:09:13.729027 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-108\" not found" node="ip-172-31-22-108" Jul 6 23:09:13.736237 systemd[1]: Created slice kubepods-burstable-pod6730ba07950617b145f71f35179673cf.slice - libcontainer container kubepods-burstable-pod6730ba07950617b145f71f35179673cf.slice. Jul 6 23:09:13.741840 kubelet[2849]: E0706 23:09:13.741766 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-108\" not found" node="ip-172-31-22-108" Jul 6 23:09:13.745528 systemd[1]: Created slice kubepods-burstable-podd0ed2258910edff6c41e3adfc33e2b40.slice - libcontainer container kubepods-burstable-podd0ed2258910edff6c41e3adfc33e2b40.slice. 
Jul 6 23:09:13.751014 kubelet[2849]: E0706 23:09:13.750947 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-108\" not found" node="ip-172-31-22-108" Jul 6 23:09:13.762340 kubelet[2849]: I0706 23:09:13.762253 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8372def66c7cacee6f1602cbe7b99df5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-108\" (UID: \"8372def66c7cacee6f1602cbe7b99df5\") " pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:13.762555 kubelet[2849]: I0706 23:09:13.762353 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0ed2258910edff6c41e3adfc33e2b40-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-108\" (UID: \"d0ed2258910edff6c41e3adfc33e2b40\") " pod="kube-system/kube-scheduler-ip-172-31-22-108" Jul 6 23:09:13.762555 kubelet[2849]: I0706 23:09:13.762399 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6730ba07950617b145f71f35179673cf-ca-certs\") pod \"kube-apiserver-ip-172-31-22-108\" (UID: \"6730ba07950617b145f71f35179673cf\") " pod="kube-system/kube-apiserver-ip-172-31-22-108" Jul 6 23:09:13.762555 kubelet[2849]: I0706 23:09:13.762437 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6730ba07950617b145f71f35179673cf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-108\" (UID: \"6730ba07950617b145f71f35179673cf\") " pod="kube-system/kube-apiserver-ip-172-31-22-108" Jul 6 23:09:13.762555 kubelet[2849]: I0706 23:09:13.762514 2849 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8372def66c7cacee6f1602cbe7b99df5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-108\" (UID: \"8372def66c7cacee6f1602cbe7b99df5\") " pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:13.762555 kubelet[2849]: I0706 23:09:13.762553 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8372def66c7cacee6f1602cbe7b99df5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-108\" (UID: \"8372def66c7cacee6f1602cbe7b99df5\") " pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:13.763001 kubelet[2849]: I0706 23:09:13.762588 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6730ba07950617b145f71f35179673cf-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-108\" (UID: \"6730ba07950617b145f71f35179673cf\") " pod="kube-system/kube-apiserver-ip-172-31-22-108" Jul 6 23:09:13.763001 kubelet[2849]: I0706 23:09:13.762622 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8372def66c7cacee6f1602cbe7b99df5-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-108\" (UID: \"8372def66c7cacee6f1602cbe7b99df5\") " pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:13.763001 kubelet[2849]: I0706 23:09:13.762661 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8372def66c7cacee6f1602cbe7b99df5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-108\" (UID: \"8372def66c7cacee6f1602cbe7b99df5\") " pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:13.765830 kubelet[2849]: 
E0706 23:09:13.765775 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-108?timeout=10s\": dial tcp 172.31.22.108:6443: connect: connection refused" interval="400ms" Jul 6 23:09:13.774182 kubelet[2849]: I0706 23:09:13.774138 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-108" Jul 6 23:09:13.774981 kubelet[2849]: E0706 23:09:13.774908 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.108:6443/api/v1/nodes\": dial tcp 172.31.22.108:6443: connect: connection refused" node="ip-172-31-22-108" Jul 6 23:09:13.978044 kubelet[2849]: I0706 23:09:13.977985 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-108" Jul 6 23:09:13.978779 kubelet[2849]: E0706 23:09:13.978697 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.108:6443/api/v1/nodes\": dial tcp 172.31.22.108:6443: connect: connection refused" node="ip-172-31-22-108" Jul 6 23:09:14.032262 containerd[1983]: time="2025-07-06T23:09:14.032190515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-108,Uid:8372def66c7cacee6f1602cbe7b99df5,Namespace:kube-system,Attempt:0,}" Jul 6 23:09:14.044155 containerd[1983]: time="2025-07-06T23:09:14.043993607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-108,Uid:6730ba07950617b145f71f35179673cf,Namespace:kube-system,Attempt:0,}" Jul 6 23:09:14.053118 containerd[1983]: time="2025-07-06T23:09:14.052683368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-108,Uid:d0ed2258910edff6c41e3adfc33e2b40,Namespace:kube-system,Attempt:0,}" Jul 6 23:09:14.167170 kubelet[2849]: E0706 23:09:14.167092 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.22.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-108?timeout=10s\": dial tcp 172.31.22.108:6443: connect: connection refused" interval="800ms" Jul 6 23:09:14.344722 kubelet[2849]: W0706 23:09:14.344446 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-108&limit=500&resourceVersion=0": dial tcp 172.31.22.108:6443: connect: connection refused Jul 6 23:09:14.344722 kubelet[2849]: E0706 23:09:14.344586 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-108&limit=500&resourceVersion=0\": dial tcp 172.31.22.108:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:09:14.383208 kubelet[2849]: I0706 23:09:14.383114 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-108" Jul 6 23:09:14.383836 kubelet[2849]: E0706 23:09:14.383781 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.108:6443/api/v1/nodes\": dial tcp 172.31.22.108:6443: connect: connection refused" node="ip-172-31-22-108" Jul 6 23:09:14.436066 kubelet[2849]: W0706 23:09:14.435980 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.108:6443: connect: connection refused Jul 6 23:09:14.436726 kubelet[2849]: E0706 23:09:14.436080 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 172.31.22.108:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:09:14.500034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846749461.mount: Deactivated successfully. Jul 6 23:09:14.506813 containerd[1983]: time="2025-07-06T23:09:14.506728737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:09:14.508999 containerd[1983]: time="2025-07-06T23:09:14.508917774Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:09:14.511455 containerd[1983]: time="2025-07-06T23:09:14.511361203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 6 23:09:14.511959 containerd[1983]: time="2025-07-06T23:09:14.511873527Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:09:14.515387 containerd[1983]: time="2025-07-06T23:09:14.515109005Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:09:14.517554 containerd[1983]: time="2025-07-06T23:09:14.517302455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:09:14.521870 containerd[1983]: time="2025-07-06T23:09:14.521768037Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:09:14.527912 containerd[1983]: time="2025-07-06T23:09:14.527416972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.283779ms" Jul 6 23:09:14.529734 containerd[1983]: time="2025-07-06T23:09:14.529635202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:09:14.534312 containerd[1983]: time="2025-07-06T23:09:14.534204040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 501.888188ms" Jul 6 23:09:14.556732 containerd[1983]: time="2025-07-06T23:09:14.555245774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.4461ms" Jul 6 23:09:14.781856 containerd[1983]: time="2025-07-06T23:09:14.781254342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:09:14.782134 containerd[1983]: time="2025-07-06T23:09:14.781569293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:09:14.782134 containerd[1983]: time="2025-07-06T23:09:14.781608957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:09:14.782134 containerd[1983]: time="2025-07-06T23:09:14.781777317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:09:14.792767 containerd[1983]: time="2025-07-06T23:09:14.792055131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:09:14.792767 containerd[1983]: time="2025-07-06T23:09:14.792153734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:09:14.792767 containerd[1983]: time="2025-07-06T23:09:14.792182651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:09:14.793377 containerd[1983]: time="2025-07-06T23:09:14.791158112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:09:14.793377 containerd[1983]: time="2025-07-06T23:09:14.791283065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:09:14.793377 containerd[1983]: time="2025-07-06T23:09:14.791313794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:09:14.793377 containerd[1983]: time="2025-07-06T23:09:14.791584678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:09:14.797663 containerd[1983]: time="2025-07-06T23:09:14.794802453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:09:14.846112 systemd[1]: Started cri-containerd-83c5b7bf614d5dc8ff6bc0c8fe22f3528cdf0736f4bb0f5f9d2d260db6a7b6ff.scope - libcontainer container 83c5b7bf614d5dc8ff6bc0c8fe22f3528cdf0736f4bb0f5f9d2d260db6a7b6ff. Jul 6 23:09:14.862510 systemd[1]: Started cri-containerd-b1eb1430d052756944c41c3551b482ec69371e485074ce55500d6f45a1ff68fd.scope - libcontainer container b1eb1430d052756944c41c3551b482ec69371e485074ce55500d6f45a1ff68fd. Jul 6 23:09:14.879663 systemd[1]: Started cri-containerd-2bfb57ebb8b0b866a654dfb38e7e9a75d0e34d8c5c7d4085e6ae1d5e9e29ae50.scope - libcontainer container 2bfb57ebb8b0b866a654dfb38e7e9a75d0e34d8c5c7d4085e6ae1d5e9e29ae50. Jul 6 23:09:14.899040 kubelet[2849]: W0706 23:09:14.898971 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.108:6443: connect: connection refused Jul 6 23:09:14.899188 kubelet[2849]: E0706 23:09:14.899055 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.108:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:09:14.971298 kubelet[2849]: E0706 23:09:14.970751 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-108?timeout=10s\": dial tcp 172.31.22.108:6443: connect: connection refused" interval="1.6s" Jul 6 23:09:14.972523 kubelet[2849]: W0706 23:09:14.972238 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": 
dial tcp 172.31.22.108:6443: connect: connection refused Jul 6 23:09:14.972523 kubelet[2849]: E0706 23:09:14.972328 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.108:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:09:14.976981 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 6 23:09:15.002210 containerd[1983]: time="2025-07-06T23:09:15.002110839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-108,Uid:6730ba07950617b145f71f35179673cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bfb57ebb8b0b866a654dfb38e7e9a75d0e34d8c5c7d4085e6ae1d5e9e29ae50\"" Jul 6 23:09:15.018613 containerd[1983]: time="2025-07-06T23:09:15.017728755Z" level=info msg="CreateContainer within sandbox \"2bfb57ebb8b0b866a654dfb38e7e9a75d0e34d8c5c7d4085e6ae1d5e9e29ae50\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:09:15.039115 containerd[1983]: time="2025-07-06T23:09:15.038923076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-108,Uid:d0ed2258910edff6c41e3adfc33e2b40,Namespace:kube-system,Attempt:0,} returns sandbox id \"83c5b7bf614d5dc8ff6bc0c8fe22f3528cdf0736f4bb0f5f9d2d260db6a7b6ff\"" Jul 6 23:09:15.047284 containerd[1983]: time="2025-07-06T23:09:15.047168160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-108,Uid:8372def66c7cacee6f1602cbe7b99df5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1eb1430d052756944c41c3551b482ec69371e485074ce55500d6f45a1ff68fd\"" Jul 6 23:09:15.054341 containerd[1983]: time="2025-07-06T23:09:15.054194461Z" level=info msg="CreateContainer within sandbox \"83c5b7bf614d5dc8ff6bc0c8fe22f3528cdf0736f4bb0f5f9d2d260db6a7b6ff\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:09:15.054971 containerd[1983]: time="2025-07-06T23:09:15.054893987Z" level=info msg="CreateContainer within sandbox \"b1eb1430d052756944c41c3551b482ec69371e485074ce55500d6f45a1ff68fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:09:15.076709 containerd[1983]: time="2025-07-06T23:09:15.076596098Z" level=info msg="CreateContainer within sandbox \"2bfb57ebb8b0b866a654dfb38e7e9a75d0e34d8c5c7d4085e6ae1d5e9e29ae50\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c97ab0f3900977716cb4a06a5eda342fc1948fe85338cb3b1789513e34002add\"" Jul 6 23:09:15.078003 containerd[1983]: time="2025-07-06T23:09:15.077809915Z" level=info msg="StartContainer for \"c97ab0f3900977716cb4a06a5eda342fc1948fe85338cb3b1789513e34002add\"" Jul 6 23:09:15.099665 containerd[1983]: time="2025-07-06T23:09:15.099092441Z" level=info msg="CreateContainer within sandbox \"83c5b7bf614d5dc8ff6bc0c8fe22f3528cdf0736f4bb0f5f9d2d260db6a7b6ff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf34d138447a8e42682709670186e4825a977d2d03ab10ef079cfc97dd001f14\"" Jul 6 23:09:15.100563 containerd[1983]: time="2025-07-06T23:09:15.100365388Z" level=info msg="StartContainer for \"cf34d138447a8e42682709670186e4825a977d2d03ab10ef079cfc97dd001f14\"" Jul 6 23:09:15.110540 containerd[1983]: time="2025-07-06T23:09:15.109808720Z" level=info msg="CreateContainer within sandbox \"b1eb1430d052756944c41c3551b482ec69371e485074ce55500d6f45a1ff68fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4b5edf8d8f88fb15631f41011d31ac47777b99d88e9fa5e8f606a25c7fcb3731\"" Jul 6 23:09:15.110768 containerd[1983]: time="2025-07-06T23:09:15.110708126Z" level=info msg="StartContainer for \"4b5edf8d8f88fb15631f41011d31ac47777b99d88e9fa5e8f606a25c7fcb3731\"" Jul 6 23:09:15.156358 systemd[1]: Started 
cri-containerd-c97ab0f3900977716cb4a06a5eda342fc1948fe85338cb3b1789513e34002add.scope - libcontainer container c97ab0f3900977716cb4a06a5eda342fc1948fe85338cb3b1789513e34002add. Jul 6 23:09:15.189892 kubelet[2849]: I0706 23:09:15.189819 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-108" Jul 6 23:09:15.190998 systemd[1]: Started cri-containerd-cf34d138447a8e42682709670186e4825a977d2d03ab10ef079cfc97dd001f14.scope - libcontainer container cf34d138447a8e42682709670186e4825a977d2d03ab10ef079cfc97dd001f14. Jul 6 23:09:15.193587 kubelet[2849]: E0706 23:09:15.191903 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.108:6443/api/v1/nodes\": dial tcp 172.31.22.108:6443: connect: connection refused" node="ip-172-31-22-108" Jul 6 23:09:15.231822 systemd[1]: Started cri-containerd-4b5edf8d8f88fb15631f41011d31ac47777b99d88e9fa5e8f606a25c7fcb3731.scope - libcontainer container 4b5edf8d8f88fb15631f41011d31ac47777b99d88e9fa5e8f606a25c7fcb3731. 
Jul 6 23:09:15.314747 containerd[1983]: time="2025-07-06T23:09:15.314592681Z" level=info msg="StartContainer for \"c97ab0f3900977716cb4a06a5eda342fc1948fe85338cb3b1789513e34002add\" returns successfully" Jul 6 23:09:15.381320 containerd[1983]: time="2025-07-06T23:09:15.381256401Z" level=info msg="StartContainer for \"cf34d138447a8e42682709670186e4825a977d2d03ab10ef079cfc97dd001f14\" returns successfully" Jul 6 23:09:15.394336 containerd[1983]: time="2025-07-06T23:09:15.394263534Z" level=info msg="StartContainer for \"4b5edf8d8f88fb15631f41011d31ac47777b99d88e9fa5e8f606a25c7fcb3731\" returns successfully" Jul 6 23:09:15.623408 kubelet[2849]: E0706 23:09:15.623262 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-108\" not found" node="ip-172-31-22-108" Jul 6 23:09:15.630548 kubelet[2849]: E0706 23:09:15.629717 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-108\" not found" node="ip-172-31-22-108" Jul 6 23:09:15.636618 kubelet[2849]: E0706 23:09:15.636149 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-108\" not found" node="ip-172-31-22-108" Jul 6 23:09:16.640535 kubelet[2849]: E0706 23:09:16.640058 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-108\" not found" node="ip-172-31-22-108" Jul 6 23:09:16.641535 kubelet[2849]: E0706 23:09:16.641200 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-108\" not found" node="ip-172-31-22-108" Jul 6 23:09:16.795527 kubelet[2849]: I0706 23:09:16.794971 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-108" Jul 6 23:09:17.785179 kubelet[2849]: E0706 23:09:17.785119 2849 kubelet.go:3190] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-108\" not found" node="ip-172-31-22-108" Jul 6 23:09:19.822129 kubelet[2849]: E0706 23:09:19.820991 2849 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-108\" not found" node="ip-172-31-22-108" Jul 6 23:09:19.898731 kubelet[2849]: E0706 23:09:19.898299 2849 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-22-108.184fcc3f95453ace default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-108,UID:ip-172-31-22-108,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-108,},FirstTimestamp:2025-07-06 23:09:13.529760462 +0000 UTC m=+1.435685163,LastTimestamp:2025-07-06 23:09:13.529760462 +0000 UTC m=+1.435685163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-108,}" Jul 6 23:09:19.959069 kubelet[2849]: I0706 23:09:19.958998 2849 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-108" Jul 6 23:09:19.959069 kubelet[2849]: E0706 23:09:19.959066 2849 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-22-108\": node \"ip-172-31-22-108\" not found" Jul 6 23:09:19.959612 kubelet[2849]: E0706 23:09:19.959450 2849 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-22-108.184fcc3f97271056 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-108,UID:ip-172-31-22-108,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on 
image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-22-108,},FirstTimestamp:2025-07-06 23:09:13.561337942 +0000 UTC m=+1.467262655,LastTimestamp:2025-07-06 23:09:13.561337942 +0000 UTC m=+1.467262655,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-108,}" Jul 6 23:09:20.047171 kubelet[2849]: E0706 23:09:20.046727 2849 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-22-108.184fcc3f9a685ee3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-108,UID:ip-172-31-22-108,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-22-108 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-22-108,},FirstTimestamp:2025-07-06 23:09:13.615949539 +0000 UTC m=+1.521874216,LastTimestamp:2025-07-06 23:09:13.615949539 +0000 UTC m=+1.521874216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-108,}" Jul 6 23:09:20.058936 kubelet[2849]: I0706 23:09:20.058871 2849 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:20.098837 kubelet[2849]: E0706 23:09:20.098042 2849 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-108\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:20.098837 kubelet[2849]: I0706 23:09:20.098101 2849 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-108" Jul 6 23:09:20.105730 kubelet[2849]: E0706 23:09:20.105650 2849 kubelet.go:3196] "Failed creating a 
mirror pod" err="pods \"kube-scheduler-ip-172-31-22-108\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-22-108" Jul 6 23:09:20.105730 kubelet[2849]: I0706 23:09:20.105711 2849 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-108" Jul 6 23:09:20.114051 kubelet[2849]: E0706 23:09:20.113984 2849 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-108\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-22-108" Jul 6 23:09:20.539990 kubelet[2849]: I0706 23:09:20.539575 2849 apiserver.go:52] "Watching apiserver" Jul 6 23:09:20.562081 kubelet[2849]: I0706 23:09:20.562019 2849 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:09:21.311886 kubelet[2849]: I0706 23:09:21.311498 2849 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:22.076617 systemd[1]: Reload requested from client PID 3130 ('systemctl') (unit session-7.scope)... Jul 6 23:09:22.077099 systemd[1]: Reloading... Jul 6 23:09:22.266553 zram_generator::config[3176]: No configuration found. Jul 6 23:09:22.494962 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:09:22.764020 systemd[1]: Reloading finished in 686 ms. Jul 6 23:09:22.804557 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:09:22.820570 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:09:22.821074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:09:22.821175 systemd[1]: kubelet.service: Consumed 2.344s CPU time, 130.4M memory peak. 
Jul 6 23:09:22.828008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:09:23.204592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:09:23.221433 (kubelet)[3235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:09:23.320044 kubelet[3235]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:09:23.320044 kubelet[3235]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:09:23.320044 kubelet[3235]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:09:23.320629 kubelet[3235]: I0706 23:09:23.320142 3235 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:09:23.350528 kubelet[3235]: I0706 23:09:23.349359 3235 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:09:23.350528 kubelet[3235]: I0706 23:09:23.349412 3235 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:09:23.350528 kubelet[3235]: I0706 23:09:23.349949 3235 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:09:23.352680 kubelet[3235]: I0706 23:09:23.352395 3235 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 6 23:09:23.357770 kubelet[3235]: I0706 23:09:23.356802 3235 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:09:23.362343 sudo[3250]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:09:23.364397 sudo[3250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:09:23.369281 kubelet[3235]: E0706 23:09:23.368625 3235 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:09:23.369281 kubelet[3235]: I0706 23:09:23.368684 3235 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:09:23.378511 kubelet[3235]: I0706 23:09:23.376280 3235 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:09:23.380104 kubelet[3235]: I0706 23:09:23.379742 3235 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:09:23.381171 kubelet[3235]: I0706 23:09:23.380371 3235 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-108","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:09:23.381171 kubelet[3235]: I0706 23:09:23.380933 3235 topology_manager.go:138] "Creating topology manager with none 
policy" Jul 6 23:09:23.381171 kubelet[3235]: I0706 23:09:23.380980 3235 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:09:23.381171 kubelet[3235]: I0706 23:09:23.381078 3235 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:09:23.381604 kubelet[3235]: I0706 23:09:23.381323 3235 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:09:23.381604 kubelet[3235]: I0706 23:09:23.381346 3235 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:09:23.381604 kubelet[3235]: I0706 23:09:23.381393 3235 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:09:23.381604 kubelet[3235]: I0706 23:09:23.381415 3235 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:09:23.398425 kubelet[3235]: I0706 23:09:23.395431 3235 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:09:23.398425 kubelet[3235]: I0706 23:09:23.396268 3235 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:09:23.398425 kubelet[3235]: I0706 23:09:23.397053 3235 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:09:23.398425 kubelet[3235]: I0706 23:09:23.397099 3235 server.go:1287] "Started kubelet" Jul 6 23:09:23.418375 kubelet[3235]: I0706 23:09:23.417421 3235 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:09:23.428657 kubelet[3235]: I0706 23:09:23.428561 3235 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:09:23.435892 kubelet[3235]: I0706 23:09:23.435854 3235 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:09:23.438295 kubelet[3235]: I0706 23:09:23.438193 3235 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:09:23.439037 kubelet[3235]: I0706 23:09:23.439006 3235 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:09:23.439581 kubelet[3235]: I0706 23:09:23.439542 3235 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:09:23.446153 kubelet[3235]: I0706 23:09:23.446118 3235 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:09:23.473768 kubelet[3235]: I0706 23:09:23.446353 3235 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:09:23.477878 kubelet[3235]: E0706 23:09:23.446579 3235 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-108\" not found" Jul 6 23:09:23.482606 kubelet[3235]: I0706 23:09:23.476863 3235 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:09:23.484788 kubelet[3235]: I0706 23:09:23.481001 3235 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:09:23.485700 kubelet[3235]: I0706 23:09:23.485093 3235 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:09:23.492037 kubelet[3235]: I0706 23:09:23.491998 3235 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:09:23.496700 kubelet[3235]: E0706 23:09:23.493843 3235 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:09:23.520264 kubelet[3235]: I0706 23:09:23.520006 3235 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:09:23.549814 kubelet[3235]: I0706 23:09:23.549701 3235 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:09:23.558853 kubelet[3235]: I0706 23:09:23.556277 3235 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:09:23.558853 kubelet[3235]: I0706 23:09:23.556325 3235 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:09:23.558853 kubelet[3235]: I0706 23:09:23.556345 3235 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:09:23.558853 kubelet[3235]: E0706 23:09:23.556421 3235 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:09:23.656602 kubelet[3235]: E0706 23:09:23.656538 3235 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:09:23.679167 kubelet[3235]: I0706 23:09:23.679134 3235 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:09:23.679807 kubelet[3235]: I0706 23:09:23.679677 3235 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:09:23.679807 kubelet[3235]: I0706 23:09:23.679720 3235 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:09:23.680032 kubelet[3235]: I0706 23:09:23.679998 3235 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:09:23.680099 kubelet[3235]: I0706 23:09:23.680032 3235 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:09:23.680099 kubelet[3235]: I0706 23:09:23.680070 3235 policy_none.go:49] "None policy: Start" Jul 6 23:09:23.680099 kubelet[3235]: I0706 23:09:23.680089 3235 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:09:23.680241 kubelet[3235]: I0706 23:09:23.680109 3235 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:09:23.680310 kubelet[3235]: I0706 23:09:23.680288 3235 state_mem.go:75] "Updated machine memory state" Jul 6 23:09:23.706563 kubelet[3235]: I0706 23:09:23.704307 3235 
manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:09:23.706563 kubelet[3235]: I0706 23:09:23.704616 3235 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:09:23.706563 kubelet[3235]: I0706 23:09:23.704638 3235 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:09:23.707566 kubelet[3235]: I0706 23:09:23.707535 3235 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:09:23.710111 kubelet[3235]: E0706 23:09:23.709975 3235 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:09:23.837621 kubelet[3235]: I0706 23:09:23.837454 3235 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-108" Jul 6 23:09:23.859514 kubelet[3235]: I0706 23:09:23.857976 3235 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:23.859514 kubelet[3235]: I0706 23:09:23.859172 3235 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-22-108" Jul 6 23:09:23.859514 kubelet[3235]: I0706 23:09:23.859280 3235 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-108" Jul 6 23:09:23.860650 kubelet[3235]: I0706 23:09:23.860602 3235 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-108" Jul 6 23:09:23.871415 kubelet[3235]: I0706 23:09:23.869543 3235 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-108" Jul 6 23:09:23.880865 kubelet[3235]: E0706 23:09:23.880800 3235 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-108\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:23.894084 
kubelet[3235]: I0706 23:09:23.894025 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8372def66c7cacee6f1602cbe7b99df5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-108\" (UID: \"8372def66c7cacee6f1602cbe7b99df5\") " pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:23.894257 kubelet[3235]: I0706 23:09:23.894100 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6730ba07950617b145f71f35179673cf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-108\" (UID: \"6730ba07950617b145f71f35179673cf\") " pod="kube-system/kube-apiserver-ip-172-31-22-108" Jul 6 23:09:23.894257 kubelet[3235]: I0706 23:09:23.894140 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8372def66c7cacee6f1602cbe7b99df5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-108\" (UID: \"8372def66c7cacee6f1602cbe7b99df5\") " pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:23.894257 kubelet[3235]: I0706 23:09:23.894179 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8372def66c7cacee6f1602cbe7b99df5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-108\" (UID: \"8372def66c7cacee6f1602cbe7b99df5\") " pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:23.894257 kubelet[3235]: I0706 23:09:23.894224 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8372def66c7cacee6f1602cbe7b99df5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-108\" (UID: 
\"8372def66c7cacee6f1602cbe7b99df5\") " pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:23.894491 kubelet[3235]: I0706 23:09:23.894261 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0ed2258910edff6c41e3adfc33e2b40-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-108\" (UID: \"d0ed2258910edff6c41e3adfc33e2b40\") " pod="kube-system/kube-scheduler-ip-172-31-22-108" Jul 6 23:09:23.894491 kubelet[3235]: I0706 23:09:23.894295 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6730ba07950617b145f71f35179673cf-ca-certs\") pod \"kube-apiserver-ip-172-31-22-108\" (UID: \"6730ba07950617b145f71f35179673cf\") " pod="kube-system/kube-apiserver-ip-172-31-22-108" Jul 6 23:09:23.894491 kubelet[3235]: I0706 23:09:23.894353 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6730ba07950617b145f71f35179673cf-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-108\" (UID: \"6730ba07950617b145f71f35179673cf\") " pod="kube-system/kube-apiserver-ip-172-31-22-108" Jul 6 23:09:23.894491 kubelet[3235]: I0706 23:09:23.894395 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8372def66c7cacee6f1602cbe7b99df5-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-108\" (UID: \"8372def66c7cacee6f1602cbe7b99df5\") " pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:24.328546 sudo[3250]: pam_unix(sudo:session): session closed for user root Jul 6 23:09:24.395429 kubelet[3235]: I0706 23:09:24.395368 3235 apiserver.go:52] "Watching apiserver" Jul 6 23:09:24.477995 kubelet[3235]: I0706 23:09:24.477821 3235 desired_state_of_world_populator.go:158] "Finished 
populating initial desired state of world" Jul 6 23:09:24.626829 kubelet[3235]: I0706 23:09:24.626682 3235 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:24.627177 kubelet[3235]: I0706 23:09:24.627140 3235 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-108" Jul 6 23:09:24.647751 kubelet[3235]: E0706 23:09:24.647688 3235 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-108\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-22-108" Jul 6 23:09:24.658497 kubelet[3235]: E0706 23:09:24.656927 3235 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-108\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-108" Jul 6 23:09:24.685291 kubelet[3235]: I0706 23:09:24.685196 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-108" podStartSLOduration=3.68515183 podStartE2EDuration="3.68515183s" podCreationTimestamp="2025-07-06 23:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:09:24.683219896 +0000 UTC m=+1.453543846" watchObservedRunningTime="2025-07-06 23:09:24.68515183 +0000 UTC m=+1.455475780" Jul 6 23:09:24.711221 kubelet[3235]: I0706 23:09:24.711133 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-108" podStartSLOduration=1.711009215 podStartE2EDuration="1.711009215s" podCreationTimestamp="2025-07-06 23:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:09:24.707421329 +0000 UTC m=+1.477745315" watchObservedRunningTime="2025-07-06 23:09:24.711009215 +0000 UTC m=+1.481333165" Jul 6 
23:09:24.756491 kubelet[3235]: I0706 23:09:24.754793 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-108" podStartSLOduration=1.7547699589999999 podStartE2EDuration="1.754769959s" podCreationTimestamp="2025-07-06 23:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:09:24.731753365 +0000 UTC m=+1.502077315" watchObservedRunningTime="2025-07-06 23:09:24.754769959 +0000 UTC m=+1.525093945" Jul 6 23:09:27.613593 kubelet[3235]: I0706 23:09:27.613548 3235 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:09:27.615776 kubelet[3235]: I0706 23:09:27.615608 3235 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:09:27.615917 containerd[1983]: time="2025-07-06T23:09:27.615011508Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:09:27.932661 sudo[2295]: pam_unix(sudo:session): session closed for user root Jul 6 23:09:27.955259 sshd[2294]: Connection closed by 147.75.109.163 port 35596 Jul 6 23:09:27.956141 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Jul 6 23:09:27.962334 systemd[1]: sshd@6-172.31.22.108:22-147.75.109.163:35596.service: Deactivated successfully. Jul 6 23:09:27.966390 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:09:27.966835 systemd[1]: session-7.scope: Consumed 11.054s CPU time, 267.4M memory peak. Jul 6 23:09:27.970591 systemd-logind[1948]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:09:27.973186 systemd-logind[1948]: Removed session 7. Jul 6 23:09:28.452652 systemd[1]: Created slice kubepods-besteffort-podfbab829e_4a5e_499f_a730_41da27027dc0.slice - libcontainer container kubepods-besteffort-podfbab829e_4a5e_499f_a730_41da27027dc0.slice. 
Jul 6 23:09:28.485865 systemd[1]: Created slice kubepods-burstable-podafd80e14_49fb_453c_b48a_d91871c2898b.slice - libcontainer container kubepods-burstable-podafd80e14_49fb_453c_b48a_d91871c2898b.slice. Jul 6 23:09:28.498669 kubelet[3235]: W0706 23:09:28.498624 3235 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-22-108" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-108' and this object Jul 6 23:09:28.503707 kubelet[3235]: W0706 23:09:28.500743 3235 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-22-108" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-108' and this object Jul 6 23:09:28.503888 kubelet[3235]: E0706 23:09:28.503769 3235 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-22-108\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-22-108' and this object" logger="UnhandledError" Jul 6 23:09:28.504017 kubelet[3235]: E0706 23:09:28.503875 3235 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-22-108\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-22-108' and this object" logger="UnhandledError" Jul 6 23:09:28.525640 kubelet[3235]: I0706 23:09:28.524657 3235 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-hostproc\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.525640 kubelet[3235]: I0706 23:09:28.524727 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-cni-path\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.525640 kubelet[3235]: I0706 23:09:28.524769 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afd80e14-49fb-453c-b48a-d91871c2898b-hubble-tls\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.525640 kubelet[3235]: I0706 23:09:28.524804 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-etc-cni-netd\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.525640 kubelet[3235]: I0706 23:09:28.524843 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afd80e14-49fb-453c-b48a-d91871c2898b-cilium-config-path\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.525640 kubelet[3235]: I0706 23:09:28.524877 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/fbab829e-4a5e-499f-a730-41da27027dc0-kube-proxy\") pod \"kube-proxy-49pbm\" (UID: \"fbab829e-4a5e-499f-a730-41da27027dc0\") " pod="kube-system/kube-proxy-49pbm" Jul 6 23:09:28.526071 kubelet[3235]: I0706 23:09:28.524917 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-lib-modules\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.526071 kubelet[3235]: I0706 23:09:28.524951 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-cilium-run\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.526071 kubelet[3235]: I0706 23:09:28.524992 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-bpf-maps\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.526071 kubelet[3235]: I0706 23:09:28.525027 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-host-proc-sys-net\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.526071 kubelet[3235]: I0706 23:09:28.525066 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbab829e-4a5e-499f-a730-41da27027dc0-lib-modules\") pod \"kube-proxy-49pbm\" (UID: 
\"fbab829e-4a5e-499f-a730-41da27027dc0\") " pod="kube-system/kube-proxy-49pbm" Jul 6 23:09:28.526071 kubelet[3235]: I0706 23:09:28.525103 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-host-proc-sys-kernel\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.526431 kubelet[3235]: I0706 23:09:28.525141 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r866m\" (UniqueName: \"kubernetes.io/projected/fbab829e-4a5e-499f-a730-41da27027dc0-kube-api-access-r866m\") pod \"kube-proxy-49pbm\" (UID: \"fbab829e-4a5e-499f-a730-41da27027dc0\") " pod="kube-system/kube-proxy-49pbm" Jul 6 23:09:28.526431 kubelet[3235]: I0706 23:09:28.525182 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-xtables-lock\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.526431 kubelet[3235]: I0706 23:09:28.525241 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfdtc\" (UniqueName: \"kubernetes.io/projected/afd80e14-49fb-453c-b48a-d91871c2898b-kube-api-access-vfdtc\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.526431 kubelet[3235]: I0706 23:09:28.525284 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-cilium-cgroup\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 
23:09:28.526431 kubelet[3235]: I0706 23:09:28.525327 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afd80e14-49fb-453c-b48a-d91871c2898b-clustermesh-secrets\") pod \"cilium-pxmp6\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") " pod="kube-system/cilium-pxmp6" Jul 6 23:09:28.527556 kubelet[3235]: I0706 23:09:28.525365 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbab829e-4a5e-499f-a730-41da27027dc0-xtables-lock\") pod \"kube-proxy-49pbm\" (UID: \"fbab829e-4a5e-499f-a730-41da27027dc0\") " pod="kube-system/kube-proxy-49pbm" Jul 6 23:09:28.763423 kubelet[3235]: I0706 23:09:28.762341 3235 status_manager.go:890] "Failed to get status for pod" podUID="7f72c556-8d05-4ec2-a1e6-c866190ea1d6" pod="kube-system/cilium-operator-6c4d7847fc-fwpzh" err="pods \"cilium-operator-6c4d7847fc-fwpzh\" is forbidden: User \"system:node:ip-172-31-22-108\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-22-108' and this object" Jul 6 23:09:28.769175 containerd[1983]: time="2025-07-06T23:09:28.769026839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49pbm,Uid:fbab829e-4a5e-499f-a730-41da27027dc0,Namespace:kube-system,Attempt:0,}" Jul 6 23:09:28.775771 systemd[1]: Created slice kubepods-besteffort-pod7f72c556_8d05_4ec2_a1e6_c866190ea1d6.slice - libcontainer container kubepods-besteffort-pod7f72c556_8d05_4ec2_a1e6_c866190ea1d6.slice. 
Jul 6 23:09:28.827971 kubelet[3235]: I0706 23:09:28.827915 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f72c556-8d05-4ec2-a1e6-c866190ea1d6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fwpzh\" (UID: \"7f72c556-8d05-4ec2-a1e6-c866190ea1d6\") " pod="kube-system/cilium-operator-6c4d7847fc-fwpzh" Jul 6 23:09:28.828876 kubelet[3235]: I0706 23:09:28.828676 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvh66\" (UniqueName: \"kubernetes.io/projected/7f72c556-8d05-4ec2-a1e6-c866190ea1d6-kube-api-access-cvh66\") pod \"cilium-operator-6c4d7847fc-fwpzh\" (UID: \"7f72c556-8d05-4ec2-a1e6-c866190ea1d6\") " pod="kube-system/cilium-operator-6c4d7847fc-fwpzh" Jul 6 23:09:28.837188 containerd[1983]: time="2025-07-06T23:09:28.836919440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:09:28.837188 containerd[1983]: time="2025-07-06T23:09:28.837034546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:09:28.837817 containerd[1983]: time="2025-07-06T23:09:28.837141533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:09:28.837817 containerd[1983]: time="2025-07-06T23:09:28.837367931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:09:28.872796 systemd[1]: Started cri-containerd-dcc897fb72afae79b743250e7de8ba7e2e82abc24ef4e689feb49605c7abbd59.scope - libcontainer container dcc897fb72afae79b743250e7de8ba7e2e82abc24ef4e689feb49605c7abbd59. 
Jul 6 23:09:28.915251 containerd[1983]: time="2025-07-06T23:09:28.915138083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49pbm,Uid:fbab829e-4a5e-499f-a730-41da27027dc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcc897fb72afae79b743250e7de8ba7e2e82abc24ef4e689feb49605c7abbd59\"" Jul 6 23:09:28.922787 containerd[1983]: time="2025-07-06T23:09:28.922613811Z" level=info msg="CreateContainer within sandbox \"dcc897fb72afae79b743250e7de8ba7e2e82abc24ef4e689feb49605c7abbd59\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:09:28.956195 containerd[1983]: time="2025-07-06T23:09:28.956078175Z" level=info msg="CreateContainer within sandbox \"dcc897fb72afae79b743250e7de8ba7e2e82abc24ef4e689feb49605c7abbd59\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"95ee4bcca633c1913284c3a4cdd541bdefba082f937c9d1c3bfe1c2d3c14e889\"" Jul 6 23:09:28.957690 containerd[1983]: time="2025-07-06T23:09:28.956929881Z" level=info msg="StartContainer for \"95ee4bcca633c1913284c3a4cdd541bdefba082f937c9d1c3bfe1c2d3c14e889\"" Jul 6 23:09:29.040982 systemd[1]: Started cri-containerd-95ee4bcca633c1913284c3a4cdd541bdefba082f937c9d1c3bfe1c2d3c14e889.scope - libcontainer container 95ee4bcca633c1913284c3a4cdd541bdefba082f937c9d1c3bfe1c2d3c14e889. Jul 6 23:09:29.122097 containerd[1983]: time="2025-07-06T23:09:29.121942110Z" level=info msg="StartContainer for \"95ee4bcca633c1913284c3a4cdd541bdefba082f937c9d1c3bfe1c2d3c14e889\" returns successfully" Jul 6 23:09:29.386069 containerd[1983]: time="2025-07-06T23:09:29.385412702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fwpzh,Uid:7f72c556-8d05-4ec2-a1e6-c866190ea1d6,Namespace:kube-system,Attempt:0,}" Jul 6 23:09:29.440335 containerd[1983]: time="2025-07-06T23:09:29.439884040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:09:29.440335 containerd[1983]: time="2025-07-06T23:09:29.440006139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:09:29.440335 containerd[1983]: time="2025-07-06T23:09:29.440052592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:09:29.440335 containerd[1983]: time="2025-07-06T23:09:29.440256190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:09:29.470772 systemd[1]: Started cri-containerd-59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba.scope - libcontainer container 59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba. Jul 6 23:09:29.547193 containerd[1983]: time="2025-07-06T23:09:29.547011210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fwpzh,Uid:7f72c556-8d05-4ec2-a1e6-c866190ea1d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\"" Jul 6 23:09:29.552874 containerd[1983]: time="2025-07-06T23:09:29.552531605Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:09:29.627618 kubelet[3235]: E0706 23:09:29.627552 3235 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jul 6 23:09:29.627787 kubelet[3235]: E0706 23:09:29.627702 3235 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afd80e14-49fb-453c-b48a-d91871c2898b-clustermesh-secrets podName:afd80e14-49fb-453c-b48a-d91871c2898b nodeName:}" failed. 
No retries permitted until 2025-07-06 23:09:30.127667946 +0000 UTC m=+6.897991896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/afd80e14-49fb-453c-b48a-d91871c2898b-clustermesh-secrets") pod "cilium-pxmp6" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b") : failed to sync secret cache: timed out waiting for the condition Jul 6 23:09:29.646676 update_engine[1951]: I20250706 23:09:29.645500 1951 update_attempter.cc:509] Updating boot flags... Jul 6 23:09:29.748906 kubelet[3235]: I0706 23:09:29.748763 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-49pbm" podStartSLOduration=1.748733991 podStartE2EDuration="1.748733991s" podCreationTimestamp="2025-07-06 23:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:09:29.745418334 +0000 UTC m=+6.515742272" watchObservedRunningTime="2025-07-06 23:09:29.748733991 +0000 UTC m=+6.519057941" Jul 6 23:09:29.809587 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3538) Jul 6 23:09:30.202520 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3521) Jul 6 23:09:30.299214 containerd[1983]: time="2025-07-06T23:09:30.298686323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxmp6,Uid:afd80e14-49fb-453c-b48a-d91871c2898b,Namespace:kube-system,Attempt:0,}" Jul 6 23:09:30.463002 containerd[1983]: time="2025-07-06T23:09:30.461674613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:09:30.463002 containerd[1983]: time="2025-07-06T23:09:30.461783746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:09:30.463002 containerd[1983]: time="2025-07-06T23:09:30.461824418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:09:30.463002 containerd[1983]: time="2025-07-06T23:09:30.461976334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:09:30.552987 systemd[1]: Started cri-containerd-d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613.scope - libcontainer container d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613. Jul 6 23:09:30.615440 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3521) Jul 6 23:09:30.664491 containerd[1983]: time="2025-07-06T23:09:30.664304476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxmp6,Uid:afd80e14-49fb-453c-b48a-d91871c2898b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\"" Jul 6 23:09:30.943428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254315627.mount: Deactivated successfully. 
Jul 6 23:09:31.589939 containerd[1983]: time="2025-07-06T23:09:31.589875161Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:31.591977 containerd[1983]: time="2025-07-06T23:09:31.591911131Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 6 23:09:31.592394 containerd[1983]: time="2025-07-06T23:09:31.592322897Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:31.595140 containerd[1983]: time="2025-07-06T23:09:31.594873073Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.042276101s" Jul 6 23:09:31.595140 containerd[1983]: time="2025-07-06T23:09:31.594934351Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 6 23:09:31.598399 containerd[1983]: time="2025-07-06T23:09:31.598129516Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:09:31.600191 containerd[1983]: time="2025-07-06T23:09:31.600140538Z" level=info msg="CreateContainer within sandbox 
\"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:09:31.624282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530240476.mount: Deactivated successfully. Jul 6 23:09:31.629788 containerd[1983]: time="2025-07-06T23:09:31.629618144Z" level=info msg="CreateContainer within sandbox \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\"" Jul 6 23:09:31.631530 containerd[1983]: time="2025-07-06T23:09:31.631221814Z" level=info msg="StartContainer for \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\"" Jul 6 23:09:31.689794 systemd[1]: Started cri-containerd-d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb.scope - libcontainer container d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb. Jul 6 23:09:31.739701 containerd[1983]: time="2025-07-06T23:09:31.739522621Z" level=info msg="StartContainer for \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\" returns successfully" Jul 6 23:09:34.821518 kubelet[3235]: I0706 23:09:34.818122 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fwpzh" podStartSLOduration=4.772060525 podStartE2EDuration="6.818099276s" podCreationTimestamp="2025-07-06 23:09:28 +0000 UTC" firstStartedPulling="2025-07-06 23:09:29.550677296 +0000 UTC m=+6.321001246" lastFinishedPulling="2025-07-06 23:09:31.596716047 +0000 UTC m=+8.367039997" observedRunningTime="2025-07-06 23:09:32.775187193 +0000 UTC m=+9.545511154" watchObservedRunningTime="2025-07-06 23:09:34.818099276 +0000 UTC m=+11.588423214" Jul 6 23:09:37.305860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205624012.mount: Deactivated successfully. 
Jul 6 23:09:39.898833 containerd[1983]: time="2025-07-06T23:09:39.898770788Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:39.901267 containerd[1983]: time="2025-07-06T23:09:39.901195459Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 6 23:09:39.903292 containerd[1983]: time="2025-07-06T23:09:39.903216076Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:39.907028 containerd[1983]: time="2025-07-06T23:09:39.906866294Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.308674565s" Jul 6 23:09:39.907028 containerd[1983]: time="2025-07-06T23:09:39.906930750Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 6 23:09:39.912131 containerd[1983]: time="2025-07-06T23:09:39.912069076Z" level=info msg="CreateContainer within sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:09:39.933905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447050861.mount: Deactivated successfully. 
Jul 6 23:09:39.941888 containerd[1983]: time="2025-07-06T23:09:39.941809098Z" level=info msg="CreateContainer within sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9\"" Jul 6 23:09:39.942622 containerd[1983]: time="2025-07-06T23:09:39.942551779Z" level=info msg="StartContainer for \"7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9\"" Jul 6 23:09:40.002792 systemd[1]: Started cri-containerd-7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9.scope - libcontainer container 7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9. Jul 6 23:09:40.052326 containerd[1983]: time="2025-07-06T23:09:40.052163062Z" level=info msg="StartContainer for \"7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9\" returns successfully" Jul 6 23:09:40.084963 systemd[1]: cri-containerd-7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9.scope: Deactivated successfully. Jul 6 23:09:40.928825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9-rootfs.mount: Deactivated successfully. 
Jul 6 23:09:41.212450 containerd[1983]: time="2025-07-06T23:09:41.212377336Z" level=info msg="shim disconnected" id=7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9 namespace=k8s.io Jul 6 23:09:41.213091 containerd[1983]: time="2025-07-06T23:09:41.213010619Z" level=warning msg="cleaning up after shim disconnected" id=7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9 namespace=k8s.io Jul 6 23:09:41.213091 containerd[1983]: time="2025-07-06T23:09:41.213061042Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:09:41.233819 containerd[1983]: time="2025-07-06T23:09:41.233748269Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:09:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:09:41.762536 containerd[1983]: time="2025-07-06T23:09:41.762378520Z" level=info msg="CreateContainer within sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:09:41.792358 containerd[1983]: time="2025-07-06T23:09:41.791713865Z" level=info msg="CreateContainer within sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42\"" Jul 6 23:09:41.795853 containerd[1983]: time="2025-07-06T23:09:41.794538584Z" level=info msg="StartContainer for \"e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42\"" Jul 6 23:09:41.860845 systemd[1]: Started cri-containerd-e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42.scope - libcontainer container e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42. 
Jul 6 23:09:41.920736 containerd[1983]: time="2025-07-06T23:09:41.920657773Z" level=info msg="StartContainer for \"e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42\" returns successfully"
Jul 6 23:09:41.947576 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:09:41.948121 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:09:41.950080 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:09:41.959743 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:09:41.967961 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:09:41.969110 systemd[1]: cri-containerd-e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42.scope: Deactivated successfully.
Jul 6 23:09:42.005916 containerd[1983]: time="2025-07-06T23:09:42.005657548Z" level=info msg="shim disconnected" id=e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42 namespace=k8s.io
Jul 6 23:09:42.005916 containerd[1983]: time="2025-07-06T23:09:42.005744720Z" level=warning msg="cleaning up after shim disconnected" id=e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42 namespace=k8s.io
Jul 6 23:09:42.005916 containerd[1983]: time="2025-07-06T23:09:42.005765650Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:09:42.006616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42-rootfs.mount: Deactivated successfully.
Jul 6 23:09:42.034877 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:09:42.046449 containerd[1983]: time="2025-07-06T23:09:42.046361778Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:09:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 6 23:09:42.768073 containerd[1983]: time="2025-07-06T23:09:42.767946379Z" level=info msg="CreateContainer within sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:09:42.811895 containerd[1983]: time="2025-07-06T23:09:42.811801385Z" level=info msg="CreateContainer within sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d\""
Jul 6 23:09:42.812770 containerd[1983]: time="2025-07-06T23:09:42.812710038Z" level=info msg="StartContainer for \"fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d\""
Jul 6 23:09:42.868843 systemd[1]: Started cri-containerd-fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d.scope - libcontainer container fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d.
Jul 6 23:09:42.933966 containerd[1983]: time="2025-07-06T23:09:42.933874074Z" level=info msg="StartContainer for \"fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d\" returns successfully"
Jul 6 23:09:42.937822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1347116666.mount: Deactivated successfully.
Jul 6 23:09:42.951028 systemd[1]: cri-containerd-fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d.scope: Deactivated successfully.
Jul 6 23:09:42.997814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d-rootfs.mount: Deactivated successfully.
Jul 6 23:09:43.001875 containerd[1983]: time="2025-07-06T23:09:43.001724576Z" level=info msg="shim disconnected" id=fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d namespace=k8s.io
Jul 6 23:09:43.002073 containerd[1983]: time="2025-07-06T23:09:43.001878016Z" level=warning msg="cleaning up after shim disconnected" id=fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d namespace=k8s.io
Jul 6 23:09:43.002073 containerd[1983]: time="2025-07-06T23:09:43.001904091Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:09:43.771976 containerd[1983]: time="2025-07-06T23:09:43.771877597Z" level=info msg="CreateContainer within sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:09:43.803115 containerd[1983]: time="2025-07-06T23:09:43.803036882Z" level=info msg="CreateContainer within sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b\""
Jul 6 23:09:43.806515 containerd[1983]: time="2025-07-06T23:09:43.805262513Z" level=info msg="StartContainer for \"a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b\""
Jul 6 23:09:43.862793 systemd[1]: Started cri-containerd-a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b.scope - libcontainer container a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b.
Jul 6 23:09:43.934125 systemd[1]: cri-containerd-a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b.scope: Deactivated successfully.
Jul 6 23:09:43.941131 containerd[1983]: time="2025-07-06T23:09:43.941067565Z" level=info msg="StartContainer for \"a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b\" returns successfully"
Jul 6 23:09:43.991104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b-rootfs.mount: Deactivated successfully.
Jul 6 23:09:43.996641 containerd[1983]: time="2025-07-06T23:09:43.996028918Z" level=info msg="shim disconnected" id=a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b namespace=k8s.io
Jul 6 23:09:43.997300 containerd[1983]: time="2025-07-06T23:09:43.997000972Z" level=warning msg="cleaning up after shim disconnected" id=a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b namespace=k8s.io
Jul 6 23:09:43.997300 containerd[1983]: time="2025-07-06T23:09:43.997043503Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:09:44.787615 containerd[1983]: time="2025-07-06T23:09:44.787110318Z" level=info msg="CreateContainer within sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:09:44.839066 containerd[1983]: time="2025-07-06T23:09:44.838983049Z" level=info msg="CreateContainer within sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\""
Jul 6 23:09:44.842559 containerd[1983]: time="2025-07-06T23:09:44.840238521Z" level=info msg="StartContainer for \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\""
Jul 6 23:09:44.931830 systemd[1]: Started cri-containerd-bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4.scope - libcontainer container bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4.
Jul 6 23:09:45.016126 containerd[1983]: time="2025-07-06T23:09:45.016039725Z" level=info msg="StartContainer for \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\" returns successfully"
Jul 6 23:09:45.166149 kubelet[3235]: I0706 23:09:45.165982 3235 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 6 23:09:45.247880 systemd[1]: Created slice kubepods-burstable-podfc3d1298_dbfa_4e21_85c9_a70db601c90c.slice - libcontainer container kubepods-burstable-podfc3d1298_dbfa_4e21_85c9_a70db601c90c.slice.
Jul 6 23:09:45.261303 kubelet[3235]: I0706 23:09:45.260146 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc3d1298-dbfa-4e21-85c9-a70db601c90c-config-volume\") pod \"coredns-668d6bf9bc-6bqqv\" (UID: \"fc3d1298-dbfa-4e21-85c9-a70db601c90c\") " pod="kube-system/coredns-668d6bf9bc-6bqqv"
Jul 6 23:09:45.261653 kubelet[3235]: I0706 23:09:45.261616 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnr8j\" (UniqueName: \"kubernetes.io/projected/fc3d1298-dbfa-4e21-85c9-a70db601c90c-kube-api-access-jnr8j\") pod \"coredns-668d6bf9bc-6bqqv\" (UID: \"fc3d1298-dbfa-4e21-85c9-a70db601c90c\") " pod="kube-system/coredns-668d6bf9bc-6bqqv"
Jul 6 23:09:45.271575 systemd[1]: Created slice kubepods-burstable-pod372e06fd_eee4_44cc_9ec9_43f95f7a3005.slice - libcontainer container kubepods-burstable-pod372e06fd_eee4_44cc_9ec9_43f95f7a3005.slice.
Jul 6 23:09:45.363514 kubelet[3235]: I0706 23:09:45.362317 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6cbh\" (UniqueName: \"kubernetes.io/projected/372e06fd-eee4-44cc-9ec9-43f95f7a3005-kube-api-access-l6cbh\") pod \"coredns-668d6bf9bc-9mxxn\" (UID: \"372e06fd-eee4-44cc-9ec9-43f95f7a3005\") " pod="kube-system/coredns-668d6bf9bc-9mxxn"
Jul 6 23:09:45.363514 kubelet[3235]: I0706 23:09:45.362387 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/372e06fd-eee4-44cc-9ec9-43f95f7a3005-config-volume\") pod \"coredns-668d6bf9bc-9mxxn\" (UID: \"372e06fd-eee4-44cc-9ec9-43f95f7a3005\") " pod="kube-system/coredns-668d6bf9bc-9mxxn"
Jul 6 23:09:45.560927 containerd[1983]: time="2025-07-06T23:09:45.560871830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6bqqv,Uid:fc3d1298-dbfa-4e21-85c9-a70db601c90c,Namespace:kube-system,Attempt:0,}"
Jul 6 23:09:45.592890 containerd[1983]: time="2025-07-06T23:09:45.591720482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9mxxn,Uid:372e06fd-eee4-44cc-9ec9-43f95f7a3005,Namespace:kube-system,Attempt:0,}"
Jul 6 23:09:48.174803 (udev-worker)[4308]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:09:48.175126 systemd-networkd[1879]: cilium_host: Link UP
Jul 6 23:09:48.178369 (udev-worker)[4306]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:09:48.178737 systemd-networkd[1879]: cilium_net: Link UP
Jul 6 23:09:48.179269 systemd-networkd[1879]: cilium_net: Gained carrier
Jul 6 23:09:48.181227 systemd-networkd[1879]: cilium_host: Gained carrier
Jul 6 23:09:48.187006 systemd-networkd[1879]: cilium_host: Gained IPv6LL
Jul 6 23:09:48.415121 (udev-worker)[4349]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:09:48.427451 systemd-networkd[1879]: cilium_vxlan: Link UP
Jul 6 23:09:48.427512 systemd-networkd[1879]: cilium_vxlan: Gained carrier
Jul 6 23:09:48.931853 systemd-networkd[1879]: cilium_net: Gained IPv6LL
Jul 6 23:09:49.020516 kernel: NET: Registered PF_ALG protocol family
Jul 6 23:09:50.020878 systemd-networkd[1879]: cilium_vxlan: Gained IPv6LL
Jul 6 23:09:50.377181 (udev-worker)[4350]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:09:50.386383 systemd-networkd[1879]: lxc_health: Link UP
Jul 6 23:09:50.387038 systemd-networkd[1879]: lxc_health: Gained carrier
Jul 6 23:09:50.709544 kernel: eth0: renamed from tmp28f4b
Jul 6 23:09:50.718873 systemd-networkd[1879]: lxc56cedf1a3afb: Link UP
Jul 6 23:09:50.720172 systemd-networkd[1879]: lxc56cedf1a3afb: Gained carrier
Jul 6 23:09:51.177548 kernel: eth0: renamed from tmp7ba76
Jul 6 23:09:51.183402 systemd-networkd[1879]: lxcb53fcf7ac0dd: Link UP
Jul 6 23:09:51.191385 systemd-networkd[1879]: lxcb53fcf7ac0dd: Gained carrier
Jul 6 23:09:52.067886 systemd-networkd[1879]: lxc_health: Gained IPv6LL
Jul 6 23:09:52.195840 systemd-networkd[1879]: lxc56cedf1a3afb: Gained IPv6LL
Jul 6 23:09:52.259780 systemd-networkd[1879]: lxcb53fcf7ac0dd: Gained IPv6LL
Jul 6 23:09:52.342365 kubelet[3235]: I0706 23:09:52.341259 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pxmp6" podStartSLOduration=15.099285836 podStartE2EDuration="24.341234881s" podCreationTimestamp="2025-07-06 23:09:28 +0000 UTC" firstStartedPulling="2025-07-06 23:09:30.666824103 +0000 UTC m=+7.437148053" lastFinishedPulling="2025-07-06 23:09:39.908773148 +0000 UTC m=+16.679097098" observedRunningTime="2025-07-06 23:09:45.874728544 +0000 UTC m=+22.645052506" watchObservedRunningTime="2025-07-06 23:09:52.341234881 +0000 UTC m=+29.111558831"
Jul 6 23:09:54.757306 ntpd[1942]: Listen normally on 7 cilium_host 192.168.0.94:123
Jul 6 23:09:54.758644 ntpd[1942]: 6 Jul 23:09:54 ntpd[1942]: Listen normally on 7 cilium_host 192.168.0.94:123
Jul 6 23:09:54.758644 ntpd[1942]: 6 Jul 23:09:54 ntpd[1942]: Listen normally on 8 cilium_net [fe80::a44d:dcff:fe7a:cd4b%4]:123
Jul 6 23:09:54.758644 ntpd[1942]: 6 Jul 23:09:54 ntpd[1942]: Listen normally on 9 cilium_host [fe80::2461:e1ff:fe64:4e92%5]:123
Jul 6 23:09:54.758644 ntpd[1942]: 6 Jul 23:09:54 ntpd[1942]: Listen normally on 10 cilium_vxlan [fe80::fc61:dfff:fe61:ed74%6]:123
Jul 6 23:09:54.758644 ntpd[1942]: 6 Jul 23:09:54 ntpd[1942]: Listen normally on 11 lxc_health [fe80::1c1e:5fff:fe10:961%8]:123
Jul 6 23:09:54.758644 ntpd[1942]: 6 Jul 23:09:54 ntpd[1942]: Listen normally on 12 lxc56cedf1a3afb [fe80::9466:2dff:fea3:ecd8%10]:123
Jul 6 23:09:54.758644 ntpd[1942]: 6 Jul 23:09:54 ntpd[1942]: Listen normally on 13 lxcb53fcf7ac0dd [fe80::480c:c8ff:fe54:fae9%12]:123
Jul 6 23:09:54.757505 ntpd[1942]: Listen normally on 8 cilium_net [fe80::a44d:dcff:fe7a:cd4b%4]:123
Jul 6 23:09:54.757627 ntpd[1942]: Listen normally on 9 cilium_host [fe80::2461:e1ff:fe64:4e92%5]:123
Jul 6 23:09:54.757706 ntpd[1942]: Listen normally on 10 cilium_vxlan [fe80::fc61:dfff:fe61:ed74%6]:123
Jul 6 23:09:54.757781 ntpd[1942]: Listen normally on 11 lxc_health [fe80::1c1e:5fff:fe10:961%8]:123
Jul 6 23:09:54.757853 ntpd[1942]: Listen normally on 12 lxc56cedf1a3afb [fe80::9466:2dff:fea3:ecd8%10]:123
Jul 6 23:09:54.757928 ntpd[1942]: Listen normally on 13 lxcb53fcf7ac0dd [fe80::480c:c8ff:fe54:fae9%12]:123
Jul 6 23:10:00.487572 containerd[1983]: time="2025-07-06T23:10:00.486888185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:10:00.488244 containerd[1983]: time="2025-07-06T23:10:00.487654314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:10:00.490553 containerd[1983]: time="2025-07-06T23:10:00.489707699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:10:00.490553 containerd[1983]: time="2025-07-06T23:10:00.490330775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:10:00.543790 containerd[1983]: time="2025-07-06T23:10:00.541541365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:10:00.543790 containerd[1983]: time="2025-07-06T23:10:00.541679596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:10:00.543790 containerd[1983]: time="2025-07-06T23:10:00.541721227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:10:00.546007 containerd[1983]: time="2025-07-06T23:10:00.545680770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:10:00.603802 systemd[1]: Started cri-containerd-28f4bfd3e3cd2387c7aa8d553ae7b54511f20363bd59a7c91bae150b0599a992.scope - libcontainer container 28f4bfd3e3cd2387c7aa8d553ae7b54511f20363bd59a7c91bae150b0599a992.
Jul 6 23:10:00.627223 systemd[1]: Started cri-containerd-7ba76d323f12718a30947390ec6462f326737b1fc8427feaba9114766419358e.scope - libcontainer container 7ba76d323f12718a30947390ec6462f326737b1fc8427feaba9114766419358e.
Jul 6 23:10:00.755687 containerd[1983]: time="2025-07-06T23:10:00.755449222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9mxxn,Uid:372e06fd-eee4-44cc-9ec9-43f95f7a3005,Namespace:kube-system,Attempt:0,} returns sandbox id \"28f4bfd3e3cd2387c7aa8d553ae7b54511f20363bd59a7c91bae150b0599a992\""
Jul 6 23:10:00.769611 containerd[1983]: time="2025-07-06T23:10:00.767752607Z" level=info msg="CreateContainer within sandbox \"28f4bfd3e3cd2387c7aa8d553ae7b54511f20363bd59a7c91bae150b0599a992\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:10:00.798537 containerd[1983]: time="2025-07-06T23:10:00.795941002Z" level=info msg="CreateContainer within sandbox \"28f4bfd3e3cd2387c7aa8d553ae7b54511f20363bd59a7c91bae150b0599a992\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a784003d58ab83715e5183812abf73d67b54bc301ffac861f4223b56aa0176ab\""
Jul 6 23:10:00.798537 containerd[1983]: time="2025-07-06T23:10:00.798079664Z" level=info msg="StartContainer for \"a784003d58ab83715e5183812abf73d67b54bc301ffac861f4223b56aa0176ab\""
Jul 6 23:10:00.875420 containerd[1983]: time="2025-07-06T23:10:00.875235603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6bqqv,Uid:fc3d1298-dbfa-4e21-85c9-a70db601c90c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ba76d323f12718a30947390ec6462f326737b1fc8427feaba9114766419358e\""
Jul 6 23:10:00.890823 containerd[1983]: time="2025-07-06T23:10:00.890732331Z" level=info msg="CreateContainer within sandbox \"7ba76d323f12718a30947390ec6462f326737b1fc8427feaba9114766419358e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:10:00.915816 systemd[1]: Started cri-containerd-a784003d58ab83715e5183812abf73d67b54bc301ffac861f4223b56aa0176ab.scope - libcontainer container a784003d58ab83715e5183812abf73d67b54bc301ffac861f4223b56aa0176ab.
Jul 6 23:10:00.935592 containerd[1983]: time="2025-07-06T23:10:00.935416962Z" level=info msg="CreateContainer within sandbox \"7ba76d323f12718a30947390ec6462f326737b1fc8427feaba9114766419358e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"289a737ec67ed32cf7fc9b791b3a55fae2276e629a5a7fadd9e5881eef9dc30c\""
Jul 6 23:10:00.938860 containerd[1983]: time="2025-07-06T23:10:00.938796007Z" level=info msg="StartContainer for \"289a737ec67ed32cf7fc9b791b3a55fae2276e629a5a7fadd9e5881eef9dc30c\""
Jul 6 23:10:01.014871 systemd[1]: Started cri-containerd-289a737ec67ed32cf7fc9b791b3a55fae2276e629a5a7fadd9e5881eef9dc30c.scope - libcontainer container 289a737ec67ed32cf7fc9b791b3a55fae2276e629a5a7fadd9e5881eef9dc30c.
Jul 6 23:10:01.116957 containerd[1983]: time="2025-07-06T23:10:01.116873037Z" level=info msg="StartContainer for \"a784003d58ab83715e5183812abf73d67b54bc301ffac861f4223b56aa0176ab\" returns successfully"
Jul 6 23:10:01.176874 containerd[1983]: time="2025-07-06T23:10:01.176646156Z" level=info msg="StartContainer for \"289a737ec67ed32cf7fc9b791b3a55fae2276e629a5a7fadd9e5881eef9dc30c\" returns successfully"
Jul 6 23:10:01.901003 kubelet[3235]: I0706 23:10:01.900890 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9mxxn" podStartSLOduration=33.900322007 podStartE2EDuration="33.900322007s" podCreationTimestamp="2025-07-06 23:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:10:01.896026416 +0000 UTC m=+38.666350390" watchObservedRunningTime="2025-07-06 23:10:01.900322007 +0000 UTC m=+38.670645957"
Jul 6 23:10:01.928027 systemd[1]: Started sshd@7-172.31.22.108:22-147.75.109.163:60370.service - OpenSSH per-connection server daemon (147.75.109.163:60370).
Jul 6 23:10:01.939560 kubelet[3235]: I0706 23:10:01.938300 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6bqqv" podStartSLOduration=33.938273866 podStartE2EDuration="33.938273866s" podCreationTimestamp="2025-07-06 23:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:10:01.937781608 +0000 UTC m=+38.708105570" watchObservedRunningTime="2025-07-06 23:10:01.938273866 +0000 UTC m=+38.708597803"
Jul 6 23:10:02.147159 sshd[4880]: Accepted publickey for core from 147.75.109.163 port 60370 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:02.151002 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:02.163442 systemd-logind[1948]: New session 8 of user core.
Jul 6 23:10:02.170819 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 6 23:10:02.462563 sshd[4888]: Connection closed by 147.75.109.163 port 60370
Jul 6 23:10:02.463511 sshd-session[4880]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:02.470734 systemd[1]: sshd@7-172.31.22.108:22-147.75.109.163:60370.service: Deactivated successfully.
Jul 6 23:10:02.475024 systemd[1]: session-8.scope: Deactivated successfully.
Jul 6 23:10:02.477226 systemd-logind[1948]: Session 8 logged out. Waiting for processes to exit.
Jul 6 23:10:02.480529 systemd-logind[1948]: Removed session 8.
Jul 6 23:10:07.507009 systemd[1]: Started sshd@8-172.31.22.108:22-147.75.109.163:34850.service - OpenSSH per-connection server daemon (147.75.109.163:34850).
Jul 6 23:10:07.689262 sshd[4902]: Accepted publickey for core from 147.75.109.163 port 34850 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:07.692243 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:07.702228 systemd-logind[1948]: New session 9 of user core.
Jul 6 23:10:07.708857 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 6 23:10:07.966401 sshd[4904]: Connection closed by 147.75.109.163 port 34850
Jul 6 23:10:07.967605 sshd-session[4902]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:07.974619 systemd[1]: sshd@8-172.31.22.108:22-147.75.109.163:34850.service: Deactivated successfully.
Jul 6 23:10:07.979389 systemd[1]: session-9.scope: Deactivated successfully.
Jul 6 23:10:07.981826 systemd-logind[1948]: Session 9 logged out. Waiting for processes to exit.
Jul 6 23:10:07.984648 systemd-logind[1948]: Removed session 9.
Jul 6 23:10:13.011059 systemd[1]: Started sshd@9-172.31.22.108:22-147.75.109.163:34860.service - OpenSSH per-connection server daemon (147.75.109.163:34860).
Jul 6 23:10:13.202267 sshd[4916]: Accepted publickey for core from 147.75.109.163 port 34860 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:13.204837 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:13.214800 systemd-logind[1948]: New session 10 of user core.
Jul 6 23:10:13.221810 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:10:13.490497 sshd[4918]: Connection closed by 147.75.109.163 port 34860
Jul 6 23:10:13.491425 sshd-session[4916]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:13.498736 systemd-logind[1948]: Session 10 logged out. Waiting for processes to exit.
Jul 6 23:10:13.499243 systemd[1]: sshd@9-172.31.22.108:22-147.75.109.163:34860.service: Deactivated successfully.
Jul 6 23:10:13.502904 systemd[1]: session-10.scope: Deactivated successfully.
Jul 6 23:10:13.508563 systemd-logind[1948]: Removed session 10.
Jul 6 23:10:18.532002 systemd[1]: Started sshd@10-172.31.22.108:22-147.75.109.163:37134.service - OpenSSH per-connection server daemon (147.75.109.163:37134).
Jul 6 23:10:18.717393 sshd[4931]: Accepted publickey for core from 147.75.109.163 port 37134 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:18.720680 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:18.730580 systemd-logind[1948]: New session 11 of user core.
Jul 6 23:10:18.738834 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 6 23:10:18.990915 sshd[4933]: Connection closed by 147.75.109.163 port 37134
Jul 6 23:10:18.992071 sshd-session[4931]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:18.997160 systemd[1]: sshd@10-172.31.22.108:22-147.75.109.163:37134.service: Deactivated successfully.
Jul 6 23:10:19.002236 systemd[1]: session-11.scope: Deactivated successfully.
Jul 6 23:10:19.006282 systemd-logind[1948]: Session 11 logged out. Waiting for processes to exit.
Jul 6 23:10:19.008623 systemd-logind[1948]: Removed session 11.
Jul 6 23:10:19.031088 systemd[1]: Started sshd@11-172.31.22.108:22-147.75.109.163:37140.service - OpenSSH per-connection server daemon (147.75.109.163:37140).
Jul 6 23:10:19.221399 sshd[4946]: Accepted publickey for core from 147.75.109.163 port 37140 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:19.223939 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:19.233624 systemd-logind[1948]: New session 12 of user core.
Jul 6 23:10:19.242753 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:10:19.570028 sshd[4948]: Connection closed by 147.75.109.163 port 37140
Jul 6 23:10:19.570986 sshd-session[4946]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:19.579019 systemd[1]: sshd@11-172.31.22.108:22-147.75.109.163:37140.service: Deactivated successfully.
Jul 6 23:10:19.586856 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:10:19.591371 systemd-logind[1948]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:10:19.622033 systemd[1]: Started sshd@12-172.31.22.108:22-147.75.109.163:37156.service - OpenSSH per-connection server daemon (147.75.109.163:37156).
Jul 6 23:10:19.624338 systemd-logind[1948]: Removed session 12.
Jul 6 23:10:19.815093 sshd[4957]: Accepted publickey for core from 147.75.109.163 port 37156 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:19.817539 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:19.825774 systemd-logind[1948]: New session 13 of user core.
Jul 6 23:10:19.835834 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:10:20.094741 sshd[4960]: Connection closed by 147.75.109.163 port 37156
Jul 6 23:10:20.095924 sshd-session[4957]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:20.101636 systemd-logind[1948]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:10:20.102921 systemd[1]: sshd@12-172.31.22.108:22-147.75.109.163:37156.service: Deactivated successfully.
Jul 6 23:10:20.106196 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:10:20.113006 systemd-logind[1948]: Removed session 13.
Jul 6 23:10:25.138042 systemd[1]: Started sshd@13-172.31.22.108:22-147.75.109.163:37162.service - OpenSSH per-connection server daemon (147.75.109.163:37162).
Jul 6 23:10:25.325955 sshd[4974]: Accepted publickey for core from 147.75.109.163 port 37162 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:25.328504 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:25.338558 systemd-logind[1948]: New session 14 of user core.
Jul 6 23:10:25.343899 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:10:25.593526 sshd[4978]: Connection closed by 147.75.109.163 port 37162
Jul 6 23:10:25.594588 sshd-session[4974]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:25.601144 systemd[1]: sshd@13-172.31.22.108:22-147.75.109.163:37162.service: Deactivated successfully.
Jul 6 23:10:25.606675 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:10:25.609637 systemd-logind[1948]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:10:25.611878 systemd-logind[1948]: Removed session 14.
Jul 6 23:10:30.633017 systemd[1]: Started sshd@14-172.31.22.108:22-147.75.109.163:44044.service - OpenSSH per-connection server daemon (147.75.109.163:44044).
Jul 6 23:10:30.811069 sshd[4992]: Accepted publickey for core from 147.75.109.163 port 44044 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:30.813597 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:30.823193 systemd-logind[1948]: New session 15 of user core.
Jul 6 23:10:30.832780 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:10:31.078309 sshd[4994]: Connection closed by 147.75.109.163 port 44044
Jul 6 23:10:31.079375 sshd-session[4992]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:31.086119 systemd[1]: sshd@14-172.31.22.108:22-147.75.109.163:44044.service: Deactivated successfully.
Jul 6 23:10:31.090457 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:10:31.092546 systemd-logind[1948]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:10:31.094431 systemd-logind[1948]: Removed session 15.
Jul 6 23:10:36.120990 systemd[1]: Started sshd@15-172.31.22.108:22-147.75.109.163:60906.service - OpenSSH per-connection server daemon (147.75.109.163:60906).
Jul 6 23:10:36.302147 sshd[5005]: Accepted publickey for core from 147.75.109.163 port 60906 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:36.304683 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:36.314441 systemd-logind[1948]: New session 16 of user core.
Jul 6 23:10:36.325735 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:10:36.568295 sshd[5007]: Connection closed by 147.75.109.163 port 60906
Jul 6 23:10:36.569393 sshd-session[5005]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:36.577397 systemd[1]: sshd@15-172.31.22.108:22-147.75.109.163:60906.service: Deactivated successfully.
Jul 6 23:10:36.581178 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:10:36.583285 systemd-logind[1948]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:10:36.585554 systemd-logind[1948]: Removed session 16.
Jul 6 23:10:41.611019 systemd[1]: Started sshd@16-172.31.22.108:22-147.75.109.163:60922.service - OpenSSH per-connection server daemon (147.75.109.163:60922).
Jul 6 23:10:41.793148 sshd[5019]: Accepted publickey for core from 147.75.109.163 port 60922 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:41.795706 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:41.804678 systemd-logind[1948]: New session 17 of user core.
Jul 6 23:10:41.815766 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:10:42.059033 sshd[5021]: Connection closed by 147.75.109.163 port 60922
Jul 6 23:10:42.060138 sshd-session[5019]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:42.066935 systemd[1]: sshd@16-172.31.22.108:22-147.75.109.163:60922.service: Deactivated successfully.
Jul 6 23:10:42.070203 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:10:42.073004 systemd-logind[1948]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:10:42.075208 systemd-logind[1948]: Removed session 17.
Jul 6 23:10:42.101022 systemd[1]: Started sshd@17-172.31.22.108:22-147.75.109.163:60936.service - OpenSSH per-connection server daemon (147.75.109.163:60936).
Jul 6 23:10:42.290905 sshd[5033]: Accepted publickey for core from 147.75.109.163 port 60936 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:42.293587 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:42.302926 systemd-logind[1948]: New session 18 of user core.
Jul 6 23:10:42.307735 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:10:42.640387 sshd[5035]: Connection closed by 147.75.109.163 port 60936
Jul 6 23:10:42.639751 sshd-session[5033]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:42.648819 systemd[1]: sshd@17-172.31.22.108:22-147.75.109.163:60936.service: Deactivated successfully.
Jul 6 23:10:42.656171 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:10:42.659799 systemd-logind[1948]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:10:42.691097 systemd[1]: Started sshd@18-172.31.22.108:22-147.75.109.163:60946.service - OpenSSH per-connection server daemon (147.75.109.163:60946).
Jul 6 23:10:42.694299 systemd-logind[1948]: Removed session 18.
Jul 6 23:10:42.873318 sshd[5044]: Accepted publickey for core from 147.75.109.163 port 60946 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:42.876024 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:42.884597 systemd-logind[1948]: New session 19 of user core.
Jul 6 23:10:42.892776 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:10:44.078228 sshd[5047]: Connection closed by 147.75.109.163 port 60946
Jul 6 23:10:44.079650 sshd-session[5044]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:44.088295 systemd[1]: sshd@18-172.31.22.108:22-147.75.109.163:60946.service: Deactivated successfully.
Jul 6 23:10:44.094235 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:10:44.105201 systemd-logind[1948]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:10:44.146659 systemd[1]: Started sshd@19-172.31.22.108:22-147.75.109.163:60962.service - OpenSSH per-connection server daemon (147.75.109.163:60962).
Jul 6 23:10:44.148595 systemd-logind[1948]: Removed session 19.
Jul 6 23:10:44.341249 sshd[5063]: Accepted publickey for core from 147.75.109.163 port 60962 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:44.344055 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:44.357069 systemd-logind[1948]: New session 20 of user core.
Jul 6 23:10:44.369744 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:10:44.874154 sshd[5066]: Connection closed by 147.75.109.163 port 60962
Jul 6 23:10:44.876024 sshd-session[5063]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:44.882842 systemd[1]: sshd@19-172.31.22.108:22-147.75.109.163:60962.service: Deactivated successfully.
Jul 6 23:10:44.886419 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:10:44.889548 systemd-logind[1948]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:10:44.891432 systemd-logind[1948]: Removed session 20.
Jul 6 23:10:44.913989 systemd[1]: Started sshd@20-172.31.22.108:22-147.75.109.163:60976.service - OpenSSH per-connection server daemon (147.75.109.163:60976).
Jul 6 23:10:45.108808 sshd[5076]: Accepted publickey for core from 147.75.109.163 port 60976 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:45.111321 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:45.120630 systemd-logind[1948]: New session 21 of user core.
Jul 6 23:10:45.126761 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:10:45.380835 sshd[5078]: Connection closed by 147.75.109.163 port 60976
Jul 6 23:10:45.381675 sshd-session[5076]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:45.388134 systemd[1]: sshd@20-172.31.22.108:22-147.75.109.163:60976.service: Deactivated successfully.
Jul 6 23:10:45.392757 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:10:45.394560 systemd-logind[1948]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:10:45.397635 systemd-logind[1948]: Removed session 21.
Jul 6 23:10:50.422024 systemd[1]: Started sshd@21-172.31.22.108:22-147.75.109.163:48140.service - OpenSSH per-connection server daemon (147.75.109.163:48140).
Jul 6 23:10:50.619600 sshd[5091]: Accepted publickey for core from 147.75.109.163 port 48140 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:50.622322 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:50.631524 systemd-logind[1948]: New session 22 of user core.
Jul 6 23:10:50.638789 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:10:50.882545 sshd[5093]: Connection closed by 147.75.109.163 port 48140
Jul 6 23:10:50.883416 sshd-session[5091]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:50.890248 systemd[1]: sshd@21-172.31.22.108:22-147.75.109.163:48140.service: Deactivated successfully.
Jul 6 23:10:50.894742 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:10:50.897584 systemd-logind[1948]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:10:50.900525 systemd-logind[1948]: Removed session 22.
Jul 6 23:10:55.926053 systemd[1]: Started sshd@22-172.31.22.108:22-147.75.109.163:48152.service - OpenSSH per-connection server daemon (147.75.109.163:48152).
Jul 6 23:10:56.133305 sshd[5108]: Accepted publickey for core from 147.75.109.163 port 48152 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:10:56.135760 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:56.145015 systemd-logind[1948]: New session 23 of user core.
Jul 6 23:10:56.161780 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:10:56.415486 sshd[5110]: Connection closed by 147.75.109.163 port 48152
Jul 6 23:10:56.416385 sshd-session[5108]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:56.422326 systemd[1]: sshd@22-172.31.22.108:22-147.75.109.163:48152.service: Deactivated successfully.
Jul 6 23:10:56.426445 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:10:56.429269 systemd-logind[1948]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:10:56.431247 systemd-logind[1948]: Removed session 23.
Jul 6 23:11:01.458036 systemd[1]: Started sshd@23-172.31.22.108:22-147.75.109.163:42874.service - OpenSSH per-connection server daemon (147.75.109.163:42874).
Jul 6 23:11:01.650775 sshd[5125]: Accepted publickey for core from 147.75.109.163 port 42874 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:11:01.654380 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:11:01.663535 systemd-logind[1948]: New session 24 of user core.
Jul 6 23:11:01.673759 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 6 23:11:01.921399 sshd[5127]: Connection closed by 147.75.109.163 port 42874
Jul 6 23:11:01.922677 sshd-session[5125]: pam_unix(sshd:session): session closed for user core
Jul 6 23:11:01.930443 systemd-logind[1948]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:11:01.931354 systemd[1]: sshd@23-172.31.22.108:22-147.75.109.163:42874.service: Deactivated successfully.
Jul 6 23:11:01.936104 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:11:01.940351 systemd-logind[1948]: Removed session 24.
Jul 6 23:11:06.966986 systemd[1]: Started sshd@24-172.31.22.108:22-147.75.109.163:33656.service - OpenSSH per-connection server daemon (147.75.109.163:33656).
Jul 6 23:11:07.144959 sshd[5139]: Accepted publickey for core from 147.75.109.163 port 33656 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:11:07.147617 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:11:07.156204 systemd-logind[1948]: New session 25 of user core.
Jul 6 23:11:07.165774 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:11:07.408210 sshd[5141]: Connection closed by 147.75.109.163 port 33656
Jul 6 23:11:07.409095 sshd-session[5139]: pam_unix(sshd:session): session closed for user core
Jul 6 23:11:07.416186 systemd[1]: sshd@24-172.31.22.108:22-147.75.109.163:33656.service: Deactivated successfully.
Jul 6 23:11:07.422425 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:11:07.424049 systemd-logind[1948]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:11:07.426415 systemd-logind[1948]: Removed session 25.
Jul 6 23:11:07.447021 systemd[1]: Started sshd@25-172.31.22.108:22-147.75.109.163:33662.service - OpenSSH per-connection server daemon (147.75.109.163:33662).
Jul 6 23:11:07.638294 sshd[5153]: Accepted publickey for core from 147.75.109.163 port 33662 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74
Jul 6 23:11:07.642159 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:11:07.653063 systemd-logind[1948]: New session 26 of user core.
Jul 6 23:11:07.659773 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 6 23:11:10.092252 containerd[1983]: time="2025-07-06T23:11:10.092161859Z" level=info msg="StopContainer for \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\" with timeout 30 (s)"
Jul 6 23:11:10.098574 containerd[1983]: time="2025-07-06T23:11:10.094823267Z" level=info msg="Stop container \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\" with signal terminated"
Jul 6 23:11:10.132840 systemd[1]: cri-containerd-d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb.scope: Deactivated successfully.
Jul 6 23:11:10.138629 containerd[1983]: time="2025-07-06T23:11:10.138566783Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:11:10.155543 containerd[1983]: time="2025-07-06T23:11:10.155438423Z" level=info msg="StopContainer for \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\" with timeout 2 (s)"
Jul 6 23:11:10.156169 containerd[1983]: time="2025-07-06T23:11:10.156131423Z" level=info msg="Stop container \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\" with signal terminated"
Jul 6 23:11:10.174704 systemd-networkd[1879]: lxc_health: Link DOWN
Jul 6 23:11:10.174724 systemd-networkd[1879]: lxc_health: Lost carrier
Jul 6 23:11:10.198395 systemd[1]: cri-containerd-bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4.scope: Deactivated successfully.
Jul 6 23:11:10.200349 systemd[1]: cri-containerd-bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4.scope: Consumed 16.224s CPU time, 125.5M memory peak, 136K read from disk, 12.9M written to disk.
Jul 6 23:11:10.220014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb-rootfs.mount: Deactivated successfully.
Jul 6 23:11:10.235044 containerd[1983]: time="2025-07-06T23:11:10.234970511Z" level=info msg="shim disconnected" id=d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb namespace=k8s.io
Jul 6 23:11:10.235811 containerd[1983]: time="2025-07-06T23:11:10.235571255Z" level=warning msg="cleaning up after shim disconnected" id=d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb namespace=k8s.io
Jul 6 23:11:10.235811 containerd[1983]: time="2025-07-06T23:11:10.235660547Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:11:10.264160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4-rootfs.mount: Deactivated successfully.
Jul 6 23:11:10.272760 containerd[1983]: time="2025-07-06T23:11:10.272675748Z" level=info msg="shim disconnected" id=bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4 namespace=k8s.io
Jul 6 23:11:10.272760 containerd[1983]: time="2025-07-06T23:11:10.272758044Z" level=warning msg="cleaning up after shim disconnected" id=bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4 namespace=k8s.io
Jul 6 23:11:10.273086 containerd[1983]: time="2025-07-06T23:11:10.272779392Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:11:10.276774 containerd[1983]: time="2025-07-06T23:11:10.276568872Z" level=info msg="StopContainer for \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\" returns successfully"
Jul 6 23:11:10.278395 containerd[1983]: time="2025-07-06T23:11:10.277954476Z" level=info msg="StopPodSandbox for \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\""
Jul 6 23:11:10.278395 containerd[1983]: time="2025-07-06T23:11:10.278030136Z" level=info msg="Container to stop \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:11:10.283310 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba-shm.mount: Deactivated successfully.
Jul 6 23:11:10.298899 systemd[1]: cri-containerd-59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba.scope: Deactivated successfully.
Jul 6 23:11:10.318493 containerd[1983]: time="2025-07-06T23:11:10.318408828Z" level=info msg="StopContainer for \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\" returns successfully"
Jul 6 23:11:10.320237 containerd[1983]: time="2025-07-06T23:11:10.319812036Z" level=info msg="StopPodSandbox for \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\""
Jul 6 23:11:10.320237 containerd[1983]: time="2025-07-06T23:11:10.319900860Z" level=info msg="Container to stop \"a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:11:10.320237 containerd[1983]: time="2025-07-06T23:11:10.319929108Z" level=info msg="Container to stop \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:11:10.320237 containerd[1983]: time="2025-07-06T23:11:10.319951632Z" level=info msg="Container to stop \"7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:11:10.320237 containerd[1983]: time="2025-07-06T23:11:10.319971672Z" level=info msg="Container to stop \"e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:11:10.320237 containerd[1983]: time="2025-07-06T23:11:10.319994016Z" level=info msg="Container to stop \"fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:11:10.325295 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613-shm.mount: Deactivated successfully.
Jul 6 23:11:10.337835 systemd[1]: cri-containerd-d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613.scope: Deactivated successfully.
Jul 6 23:11:10.377197 containerd[1983]: time="2025-07-06T23:11:10.376551996Z" level=info msg="shim disconnected" id=59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba namespace=k8s.io
Jul 6 23:11:10.379445 containerd[1983]: time="2025-07-06T23:11:10.378992364Z" level=warning msg="cleaning up after shim disconnected" id=59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba namespace=k8s.io
Jul 6 23:11:10.379737 containerd[1983]: time="2025-07-06T23:11:10.379699596Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:11:10.401843 containerd[1983]: time="2025-07-06T23:11:10.401757168Z" level=info msg="shim disconnected" id=d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613 namespace=k8s.io
Jul 6 23:11:10.401843 containerd[1983]: time="2025-07-06T23:11:10.401840232Z" level=warning msg="cleaning up after shim disconnected" id=d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613 namespace=k8s.io
Jul 6 23:11:10.402318 containerd[1983]: time="2025-07-06T23:11:10.401863488Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:11:10.421001 containerd[1983]: time="2025-07-06T23:11:10.420804588Z" level=info msg="TearDown network for sandbox \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\" successfully"
Jul 6 23:11:10.421001 containerd[1983]: time="2025-07-06T23:11:10.420853704Z" level=info msg="StopPodSandbox for \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\" returns successfully"
Jul 6 23:11:10.439932 containerd[1983]: time="2025-07-06T23:11:10.439743192Z" level=info msg="TearDown network for sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" successfully"
Jul 6 23:11:10.439932 containerd[1983]: time="2025-07-06T23:11:10.439817976Z" level=info msg="StopPodSandbox for \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" returns successfully"
Jul 6 23:11:10.551609 kubelet[3235]: I0706 23:11:10.551542 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-cni-path\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.552900 kubelet[3235]: I0706 23:11:10.551617 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-etc-cni-netd\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.552900 kubelet[3235]: I0706 23:11:10.551653 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-xtables-lock\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.552900 kubelet[3235]: I0706 23:11:10.551688 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-cilium-cgroup\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.552900 kubelet[3235]: I0706 23:11:10.551737 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfdtc\" (UniqueName: \"kubernetes.io/projected/afd80e14-49fb-453c-b48a-d91871c2898b-kube-api-access-vfdtc\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.552900 kubelet[3235]: I0706 23:11:10.551777 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvh66\" (UniqueName: \"kubernetes.io/projected/7f72c556-8d05-4ec2-a1e6-c866190ea1d6-kube-api-access-cvh66\") pod \"7f72c556-8d05-4ec2-a1e6-c866190ea1d6\" (UID: \"7f72c556-8d05-4ec2-a1e6-c866190ea1d6\") "
Jul 6 23:11:10.552900 kubelet[3235]: I0706 23:11:10.551812 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-hostproc\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.553234 kubelet[3235]: I0706 23:11:10.551862 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afd80e14-49fb-453c-b48a-d91871c2898b-cilium-config-path\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.553234 kubelet[3235]: I0706 23:11:10.551900 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-host-proc-sys-net\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.553234 kubelet[3235]: I0706 23:11:10.551940 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f72c556-8d05-4ec2-a1e6-c866190ea1d6-cilium-config-path\") pod \"7f72c556-8d05-4ec2-a1e6-c866190ea1d6\" (UID: \"7f72c556-8d05-4ec2-a1e6-c866190ea1d6\") "
Jul 6 23:11:10.553234 kubelet[3235]: I0706 23:11:10.551978 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afd80e14-49fb-453c-b48a-d91871c2898b-hubble-tls\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.553234 kubelet[3235]: I0706 23:11:10.552011 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-cilium-run\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.553234 kubelet[3235]: I0706 23:11:10.552042 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-bpf-maps\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.554681 kubelet[3235]: I0706 23:11:10.552075 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-host-proc-sys-kernel\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.554681 kubelet[3235]: I0706 23:11:10.552111 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-lib-modules\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.554681 kubelet[3235]: I0706 23:11:10.552147 3235 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afd80e14-49fb-453c-b48a-d91871c2898b-clustermesh-secrets\") pod \"afd80e14-49fb-453c-b48a-d91871c2898b\" (UID: \"afd80e14-49fb-453c-b48a-d91871c2898b\") "
Jul 6 23:11:10.555388 kubelet[3235]: I0706 23:11:10.555334 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-cni-path" (OuterVolumeSpecName: "cni-path") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:11:10.555632 kubelet[3235]: I0706 23:11:10.555602 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:11:10.556065 kubelet[3235]: I0706 23:11:10.556031 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:11:10.556389 kubelet[3235]: I0706 23:11:10.556355 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:11:10.558846 kubelet[3235]: I0706 23:11:10.558778 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:11:10.558999 kubelet[3235]: I0706 23:11:10.558862 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:11:10.558999 kubelet[3235]: I0706 23:11:10.558902 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:11:10.558999 kubelet[3235]: I0706 23:11:10.558941 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:11:10.560508 kubelet[3235]: I0706 23:11:10.559941 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-hostproc" (OuterVolumeSpecName: "hostproc") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:11:10.560508 kubelet[3235]: I0706 23:11:10.560277 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:11:10.563944 kubelet[3235]: I0706 23:11:10.563884 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd80e14-49fb-453c-b48a-d91871c2898b-kube-api-access-vfdtc" (OuterVolumeSpecName: "kube-api-access-vfdtc") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "kube-api-access-vfdtc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 6 23:11:10.566134 kubelet[3235]: I0706 23:11:10.566082 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd80e14-49fb-453c-b48a-d91871c2898b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 6 23:11:10.566671 kubelet[3235]: I0706 23:11:10.566235 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd80e14-49fb-453c-b48a-d91871c2898b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 6 23:11:10.567737 kubelet[3235]: I0706 23:11:10.567631 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f72c556-8d05-4ec2-a1e6-c866190ea1d6-kube-api-access-cvh66" (OuterVolumeSpecName: "kube-api-access-cvh66") pod "7f72c556-8d05-4ec2-a1e6-c866190ea1d6" (UID: "7f72c556-8d05-4ec2-a1e6-c866190ea1d6"). InnerVolumeSpecName "kube-api-access-cvh66". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 6 23:11:10.569641 kubelet[3235]: I0706 23:11:10.569458 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f72c556-8d05-4ec2-a1e6-c866190ea1d6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f72c556-8d05-4ec2-a1e6-c866190ea1d6" (UID: "7f72c556-8d05-4ec2-a1e6-c866190ea1d6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 6 23:11:10.571228 kubelet[3235]: I0706 23:11:10.571182 3235 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd80e14-49fb-453c-b48a-d91871c2898b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "afd80e14-49fb-453c-b48a-d91871c2898b" (UID: "afd80e14-49fb-453c-b48a-d91871c2898b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 6 23:11:10.654171 kubelet[3235]: I0706 23:11:10.653418 3235 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-hostproc\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654171 kubelet[3235]: I0706 23:11:10.653496 3235 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afd80e14-49fb-453c-b48a-d91871c2898b-cilium-config-path\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654171 kubelet[3235]: I0706 23:11:10.653530 3235 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-host-proc-sys-net\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654171 kubelet[3235]: I0706 23:11:10.653553 3235 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f72c556-8d05-4ec2-a1e6-c866190ea1d6-cilium-config-path\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654171 kubelet[3235]: I0706 23:11:10.653574 3235 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afd80e14-49fb-453c-b48a-d91871c2898b-hubble-tls\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654171 kubelet[3235]: I0706 23:11:10.653596 3235 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-cilium-run\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654171 kubelet[3235]: I0706 23:11:10.653617 3235 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-bpf-maps\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654171 kubelet[3235]: I0706 23:11:10.653637 3235 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-host-proc-sys-kernel\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654795 kubelet[3235]: I0706 23:11:10.653659 3235 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-lib-modules\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654795 kubelet[3235]: I0706 23:11:10.653679 3235 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afd80e14-49fb-453c-b48a-d91871c2898b-clustermesh-secrets\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654795 kubelet[3235]: I0706 23:11:10.653701 3235 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-cni-path\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654795 kubelet[3235]: I0706 23:11:10.653721 3235 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-etc-cni-netd\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654795 kubelet[3235]: I0706 23:11:10.653742 3235 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-xtables-lock\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654795 kubelet[3235]: I0706 23:11:10.653762 3235 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afd80e14-49fb-453c-b48a-d91871c2898b-cilium-cgroup\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654795 kubelet[3235]: I0706 23:11:10.653782 3235 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vfdtc\" (UniqueName: \"kubernetes.io/projected/afd80e14-49fb-453c-b48a-d91871c2898b-kube-api-access-vfdtc\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:10.654795 kubelet[3235]: I0706 23:11:10.653802 3235 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cvh66\" (UniqueName: \"kubernetes.io/projected/7f72c556-8d05-4ec2-a1e6-c866190ea1d6-kube-api-access-cvh66\") on node \"ip-172-31-22-108\" DevicePath \"\""
Jul 6 23:11:11.057065 kubelet[3235]: I0706 23:11:11.054952 3235 scope.go:117] "RemoveContainer" containerID="d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb"
Jul 6 23:11:11.061234 containerd[1983]: time="2025-07-06T23:11:11.061169748Z" level=info msg="RemoveContainer for \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\""
Jul 6 23:11:11.071448 containerd[1983]: time="2025-07-06T23:11:11.071371860Z" level=info msg="RemoveContainer for \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\" returns successfully"
Jul 6 23:11:11.074932 systemd[1]: Removed slice kubepods-besteffort-pod7f72c556_8d05_4ec2_a1e6_c866190ea1d6.slice - libcontainer container kubepods-besteffort-pod7f72c556_8d05_4ec2_a1e6_c866190ea1d6.slice.
Jul 6 23:11:11.077295 kubelet[3235]: I0706 23:11:11.076889 3235 scope.go:117] "RemoveContainer" containerID="d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb" Jul 6 23:11:11.078671 containerd[1983]: time="2025-07-06T23:11:11.077942688Z" level=error msg="ContainerStatus for \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\": not found" Jul 6 23:11:11.080503 kubelet[3235]: E0706 23:11:11.080411 3235 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\": not found" containerID="d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb" Jul 6 23:11:11.080692 kubelet[3235]: I0706 23:11:11.080514 3235 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb"} err="failed to get container status \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7e0ad06ae236dfbe1fac755f94a2bb7a6b4426028ad06e1bacb0fc5ba4593cb\": not found" Jul 6 23:11:11.080801 kubelet[3235]: I0706 23:11:11.080691 3235 scope.go:117] "RemoveContainer" containerID="bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4" Jul 6 23:11:11.085611 systemd[1]: Removed slice kubepods-burstable-podafd80e14_49fb_453c_b48a_d91871c2898b.slice - libcontainer container kubepods-burstable-podafd80e14_49fb_453c_b48a_d91871c2898b.slice. Jul 6 23:11:11.085855 systemd[1]: kubepods-burstable-podafd80e14_49fb_453c_b48a_d91871c2898b.slice: Consumed 16.400s CPU time, 126M memory peak, 136K read from disk, 12.9M written to disk. 
Jul 6 23:11:11.091397 containerd[1983]: time="2025-07-06T23:11:11.090846504Z" level=info msg="RemoveContainer for \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\"" Jul 6 23:11:11.100559 containerd[1983]: time="2025-07-06T23:11:11.100389636Z" level=info msg="RemoveContainer for \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\" returns successfully" Jul 6 23:11:11.103988 kubelet[3235]: I0706 23:11:11.101430 3235 scope.go:117] "RemoveContainer" containerID="a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b" Jul 6 23:11:11.107308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613-rootfs.mount: Deactivated successfully. Jul 6 23:11:11.107598 systemd[1]: var-lib-kubelet-pods-afd80e14\x2d49fb\x2d453c\x2db48a\x2dd91871c2898b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:11:11.108208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba-rootfs.mount: Deactivated successfully. Jul 6 23:11:11.108911 systemd[1]: var-lib-kubelet-pods-7f72c556\x2d8d05\x2d4ec2\x2da1e6\x2dc866190ea1d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcvh66.mount: Deactivated successfully. Jul 6 23:11:11.109771 systemd[1]: var-lib-kubelet-pods-afd80e14\x2d49fb\x2d453c\x2db48a\x2dd91871c2898b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvfdtc.mount: Deactivated successfully. Jul 6 23:11:11.109915 systemd[1]: var-lib-kubelet-pods-afd80e14\x2d49fb\x2d453c\x2db48a\x2dd91871c2898b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 6 23:11:11.115200 containerd[1983]: time="2025-07-06T23:11:11.111154644Z" level=info msg="RemoveContainer for \"a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b\"" Jul 6 23:11:11.122880 containerd[1983]: time="2025-07-06T23:11:11.122819088Z" level=info msg="RemoveContainer for \"a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b\" returns successfully" Jul 6 23:11:11.123978 kubelet[3235]: I0706 23:11:11.123145 3235 scope.go:117] "RemoveContainer" containerID="fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d" Jul 6 23:11:11.125204 containerd[1983]: time="2025-07-06T23:11:11.124977144Z" level=info msg="RemoveContainer for \"fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d\"" Jul 6 23:11:11.155133 containerd[1983]: time="2025-07-06T23:11:11.155005104Z" level=info msg="RemoveContainer for \"fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d\" returns successfully" Jul 6 23:11:11.156096 kubelet[3235]: I0706 23:11:11.155564 3235 scope.go:117] "RemoveContainer" containerID="e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42" Jul 6 23:11:11.159024 containerd[1983]: time="2025-07-06T23:11:11.158979036Z" level=info msg="RemoveContainer for \"e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42\"" Jul 6 23:11:11.167185 containerd[1983]: time="2025-07-06T23:11:11.167051652Z" level=info msg="RemoveContainer for \"e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42\" returns successfully" Jul 6 23:11:11.167569 kubelet[3235]: I0706 23:11:11.167510 3235 scope.go:117] "RemoveContainer" containerID="7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9" Jul 6 23:11:11.172282 containerd[1983]: time="2025-07-06T23:11:11.171981648Z" level=info msg="RemoveContainer for \"7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9\"" Jul 6 23:11:11.193156 containerd[1983]: time="2025-07-06T23:11:11.193090500Z" level=info msg="RemoveContainer for 
\"7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9\" returns successfully" Jul 6 23:11:11.193816 kubelet[3235]: I0706 23:11:11.193507 3235 scope.go:117] "RemoveContainer" containerID="bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4" Jul 6 23:11:11.194423 containerd[1983]: time="2025-07-06T23:11:11.194322288Z" level=error msg="ContainerStatus for \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\": not found" Jul 6 23:11:11.194894 kubelet[3235]: E0706 23:11:11.194801 3235 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\": not found" containerID="bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4" Jul 6 23:11:11.194983 kubelet[3235]: I0706 23:11:11.194880 3235 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4"} err="failed to get container status \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf9ff910607b97cc7c95f7e48f4629450c84591ebbe84971b8bee162ecdb5dc4\": not found" Jul 6 23:11:11.194983 kubelet[3235]: I0706 23:11:11.194950 3235 scope.go:117] "RemoveContainer" containerID="a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b" Jul 6 23:11:11.195493 containerd[1983]: time="2025-07-06T23:11:11.195398484Z" level=error msg="ContainerStatus for \"a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b\": not found" Jul 6 23:11:11.195985 kubelet[3235]: E0706 23:11:11.195925 3235 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b\": not found" containerID="a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b" Jul 6 23:11:11.196088 kubelet[3235]: I0706 23:11:11.196009 3235 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b"} err="failed to get container status \"a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3a2274c42e84bbb51fb74d64bfe0c04d064702a2de775a817839afc38c9b02b\": not found" Jul 6 23:11:11.196088 kubelet[3235]: I0706 23:11:11.196080 3235 scope.go:117] "RemoveContainer" containerID="fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d" Jul 6 23:11:11.196480 containerd[1983]: time="2025-07-06T23:11:11.196416948Z" level=error msg="ContainerStatus for \"fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d\": not found" Jul 6 23:11:11.196728 kubelet[3235]: E0706 23:11:11.196685 3235 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d\": not found" containerID="fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d" Jul 6 23:11:11.196814 kubelet[3235]: I0706 23:11:11.196739 3235 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d"} err="failed to get container status \"fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc943c1bed416cd06347e31c651f33579f1143045be13c2aad0829b3df83783d\": not found" Jul 6 23:11:11.196814 kubelet[3235]: I0706 23:11:11.196771 3235 scope.go:117] "RemoveContainer" containerID="e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42" Jul 6 23:11:11.197337 containerd[1983]: time="2025-07-06T23:11:11.197136708Z" level=error msg="ContainerStatus for \"e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42\": not found" Jul 6 23:11:11.197484 kubelet[3235]: E0706 23:11:11.197414 3235 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42\": not found" containerID="e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42" Jul 6 23:11:11.197548 kubelet[3235]: I0706 23:11:11.197495 3235 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42"} err="failed to get container status \"e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42\": rpc error: code = NotFound desc = an error occurred when try to find container \"e77ef9d3a65287eaf318ad02df3906859a8e521c68ee88c81c6315878b922a42\": not found" Jul 6 23:11:11.197548 kubelet[3235]: I0706 23:11:11.197530 3235 scope.go:117] "RemoveContainer" containerID="7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9" Jul 6 23:11:11.197910 containerd[1983]: 
time="2025-07-06T23:11:11.197858532Z" level=error msg="ContainerStatus for \"7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9\": not found" Jul 6 23:11:11.198103 kubelet[3235]: E0706 23:11:11.198064 3235 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9\": not found" containerID="7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9" Jul 6 23:11:11.198203 kubelet[3235]: I0706 23:11:11.198113 3235 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9"} err="failed to get container status \"7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9\": rpc error: code = NotFound desc = an error occurred when try to find container \"7040fceadb639f4074f0d26f0cc32fc02644313d611c4a920fc2eca4139ccaf9\": not found" Jul 6 23:11:11.562506 kubelet[3235]: I0706 23:11:11.561328 3235 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f72c556-8d05-4ec2-a1e6-c866190ea1d6" path="/var/lib/kubelet/pods/7f72c556-8d05-4ec2-a1e6-c866190ea1d6/volumes" Jul 6 23:11:11.562506 kubelet[3235]: I0706 23:11:11.562342 3235 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd80e14-49fb-453c-b48a-d91871c2898b" path="/var/lib/kubelet/pods/afd80e14-49fb-453c-b48a-d91871c2898b/volumes" Jul 6 23:11:12.021534 sshd[5155]: Connection closed by 147.75.109.163 port 33662 Jul 6 23:11:12.022581 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Jul 6 23:11:12.028156 systemd[1]: sshd@25-172.31.22.108:22-147.75.109.163:33662.service: Deactivated successfully. 
Jul 6 23:11:12.032833 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:11:12.033446 systemd[1]: session-26.scope: Consumed 1.694s CPU time, 23.6M memory peak. Jul 6 23:11:12.037441 systemd-logind[1948]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:11:12.039352 systemd-logind[1948]: Removed session 26. Jul 6 23:11:12.062041 systemd[1]: Started sshd@26-172.31.22.108:22-147.75.109.163:33668.service - OpenSSH per-connection server daemon (147.75.109.163:33668). Jul 6 23:11:12.252670 sshd[5320]: Accepted publickey for core from 147.75.109.163 port 33668 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74 Jul 6 23:11:12.255066 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:11:12.263996 systemd-logind[1948]: New session 27 of user core. Jul 6 23:11:12.269774 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 6 23:11:12.757137 ntpd[1942]: Deleting interface #11 lxc_health, fe80::1c1e:5fff:fe10:961%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs Jul 6 23:11:12.757969 ntpd[1942]: 6 Jul 23:11:12 ntpd[1942]: Deleting interface #11 lxc_health, fe80::1c1e:5fff:fe10:961%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs Jul 6 23:11:13.747018 kubelet[3235]: E0706 23:11:13.746953 3235 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:11:14.231554 sshd[5322]: Connection closed by 147.75.109.163 port 33668 Jul 6 23:11:14.232608 sshd-session[5320]: pam_unix(sshd:session): session closed for user core Jul 6 23:11:14.244732 systemd[1]: sshd@26-172.31.22.108:22-147.75.109.163:33668.service: Deactivated successfully. Jul 6 23:11:14.251925 systemd[1]: session-27.scope: Deactivated successfully. 
Jul 6 23:11:14.253814 systemd[1]: session-27.scope: Consumed 1.752s CPU time, 25.7M memory peak. Jul 6 23:11:14.257512 systemd-logind[1948]: Session 27 logged out. Waiting for processes to exit. Jul 6 23:11:14.283032 systemd[1]: Started sshd@27-172.31.22.108:22-147.75.109.163:33678.service - OpenSSH per-connection server daemon (147.75.109.163:33678). Jul 6 23:11:14.287616 systemd-logind[1948]: Removed session 27. Jul 6 23:11:14.327704 kubelet[3235]: I0706 23:11:14.327612 3235 memory_manager.go:355] "RemoveStaleState removing state" podUID="7f72c556-8d05-4ec2-a1e6-c866190ea1d6" containerName="cilium-operator" Jul 6 23:11:14.327704 kubelet[3235]: I0706 23:11:14.327664 3235 memory_manager.go:355] "RemoveStaleState removing state" podUID="afd80e14-49fb-453c-b48a-d91871c2898b" containerName="cilium-agent" Jul 6 23:11:14.361571 systemd[1]: Created slice kubepods-burstable-pode3be1d45_d70a_4466_9cdb_f1de99e5404e.slice - libcontainer container kubepods-burstable-pode3be1d45_d70a_4466_9cdb_f1de99e5404e.slice. 
Jul 6 23:11:14.477232 kubelet[3235]: I0706 23:11:14.477127 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3be1d45-d70a-4466-9cdb-f1de99e5404e-host-proc-sys-net\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477232 kubelet[3235]: I0706 23:11:14.477199 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3be1d45-d70a-4466-9cdb-f1de99e5404e-hubble-tls\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477572 kubelet[3235]: I0706 23:11:14.477241 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3be1d45-d70a-4466-9cdb-f1de99e5404e-cni-path\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477572 kubelet[3235]: I0706 23:11:14.477279 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scf9q\" (UniqueName: \"kubernetes.io/projected/e3be1d45-d70a-4466-9cdb-f1de99e5404e-kube-api-access-scf9q\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477572 kubelet[3235]: I0706 23:11:14.477321 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3be1d45-d70a-4466-9cdb-f1de99e5404e-xtables-lock\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477572 kubelet[3235]: I0706 23:11:14.477367 3235 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3be1d45-d70a-4466-9cdb-f1de99e5404e-etc-cni-netd\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477572 kubelet[3235]: I0706 23:11:14.477402 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3be1d45-d70a-4466-9cdb-f1de99e5404e-lib-modules\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477572 kubelet[3235]: I0706 23:11:14.477435 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3be1d45-d70a-4466-9cdb-f1de99e5404e-clustermesh-secrets\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477858 kubelet[3235]: I0706 23:11:14.477510 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e3be1d45-d70a-4466-9cdb-f1de99e5404e-cilium-ipsec-secrets\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477858 kubelet[3235]: I0706 23:11:14.477549 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3be1d45-d70a-4466-9cdb-f1de99e5404e-hostproc\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477858 kubelet[3235]: I0706 23:11:14.477586 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/e3be1d45-d70a-4466-9cdb-f1de99e5404e-cilium-run\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477858 kubelet[3235]: I0706 23:11:14.477619 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3be1d45-d70a-4466-9cdb-f1de99e5404e-bpf-maps\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477858 kubelet[3235]: I0706 23:11:14.477656 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3be1d45-d70a-4466-9cdb-f1de99e5404e-cilium-cgroup\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.477858 kubelet[3235]: I0706 23:11:14.477697 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3be1d45-d70a-4466-9cdb-f1de99e5404e-cilium-config-path\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.478145 kubelet[3235]: I0706 23:11:14.477730 3235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3be1d45-d70a-4466-9cdb-f1de99e5404e-host-proc-sys-kernel\") pod \"cilium-zrmzm\" (UID: \"e3be1d45-d70a-4466-9cdb-f1de99e5404e\") " pod="kube-system/cilium-zrmzm" Jul 6 23:11:14.539450 sshd[5331]: Accepted publickey for core from 147.75.109.163 port 33678 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74 Jul 6 23:11:14.541910 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:11:14.552079 systemd-logind[1948]: 
New session 28 of user core. Jul 6 23:11:14.558761 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 6 23:11:14.674020 containerd[1983]: time="2025-07-06T23:11:14.673896293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zrmzm,Uid:e3be1d45-d70a-4466-9cdb-f1de99e5404e,Namespace:kube-system,Attempt:0,}" Jul 6 23:11:14.680404 sshd[5334]: Connection closed by 147.75.109.163 port 33678 Jul 6 23:11:14.681296 sshd-session[5331]: pam_unix(sshd:session): session closed for user core Jul 6 23:11:14.688981 systemd[1]: sshd@27-172.31.22.108:22-147.75.109.163:33678.service: Deactivated successfully. Jul 6 23:11:14.706080 systemd[1]: session-28.scope: Deactivated successfully. Jul 6 23:11:14.709860 systemd-logind[1948]: Session 28 logged out. Waiting for processes to exit. Jul 6 23:11:14.734033 containerd[1983]: time="2025-07-06T23:11:14.732212874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:11:14.734033 containerd[1983]: time="2025-07-06T23:11:14.733532262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:11:14.734033 containerd[1983]: time="2025-07-06T23:11:14.733565250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:11:14.734033 containerd[1983]: time="2025-07-06T23:11:14.733727142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:11:14.735022 systemd[1]: Started sshd@28-172.31.22.108:22-147.75.109.163:33694.service - OpenSSH per-connection server daemon (147.75.109.163:33694). Jul 6 23:11:14.737570 systemd-logind[1948]: Removed session 28. 
Jul 6 23:11:14.773763 systemd[1]: Started cri-containerd-e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36.scope - libcontainer container e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36. Jul 6 23:11:14.829934 containerd[1983]: time="2025-07-06T23:11:14.829758234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zrmzm,Uid:e3be1d45-d70a-4466-9cdb-f1de99e5404e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36\"" Jul 6 23:11:14.838762 containerd[1983]: time="2025-07-06T23:11:14.838675122Z" level=info msg="CreateContainer within sandbox \"e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:11:14.862789 containerd[1983]: time="2025-07-06T23:11:14.862708722Z" level=info msg="CreateContainer within sandbox \"e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b76e4b69fac07ec1619b40b7492d1407953445f2b268951515e5d28c23f425ad\"" Jul 6 23:11:14.864860 containerd[1983]: time="2025-07-06T23:11:14.864788802Z" level=info msg="StartContainer for \"b76e4b69fac07ec1619b40b7492d1407953445f2b268951515e5d28c23f425ad\"" Jul 6 23:11:14.911808 systemd[1]: Started cri-containerd-b76e4b69fac07ec1619b40b7492d1407953445f2b268951515e5d28c23f425ad.scope - libcontainer container b76e4b69fac07ec1619b40b7492d1407953445f2b268951515e5d28c23f425ad. 
Jul 6 23:11:14.951760 sshd[5357]: Accepted publickey for core from 147.75.109.163 port 33694 ssh2: RSA SHA256:mNHXpHG4Fyj2vy8ZuaqRx+rDBdQCP0CFmBMNAmRcq74 Jul 6 23:11:14.953739 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:11:14.964575 containerd[1983]: time="2025-07-06T23:11:14.964509643Z" level=info msg="StartContainer for \"b76e4b69fac07ec1619b40b7492d1407953445f2b268951515e5d28c23f425ad\" returns successfully" Jul 6 23:11:14.967270 systemd-logind[1948]: New session 29 of user core. Jul 6 23:11:14.975813 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 6 23:11:14.994070 systemd[1]: cri-containerd-b76e4b69fac07ec1619b40b7492d1407953445f2b268951515e5d28c23f425ad.scope: Deactivated successfully. Jul 6 23:11:15.052906 containerd[1983]: time="2025-07-06T23:11:15.052807515Z" level=info msg="shim disconnected" id=b76e4b69fac07ec1619b40b7492d1407953445f2b268951515e5d28c23f425ad namespace=k8s.io Jul 6 23:11:15.052906 containerd[1983]: time="2025-07-06T23:11:15.052885407Z" level=warning msg="cleaning up after shim disconnected" id=b76e4b69fac07ec1619b40b7492d1407953445f2b268951515e5d28c23f425ad namespace=k8s.io Jul 6 23:11:15.052906 containerd[1983]: time="2025-07-06T23:11:15.052907583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:11:15.097287 containerd[1983]: time="2025-07-06T23:11:15.097043524Z" level=info msg="CreateContainer within sandbox \"e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:11:15.144667 containerd[1983]: time="2025-07-06T23:11:15.144592828Z" level=info msg="CreateContainer within sandbox \"e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"af57b37c0fbb19b4e8da94313fc57d386ce76f9df3e52d6f92e27325a9aab5cb\"" Jul 6 23:11:15.153533 containerd[1983]: 
time="2025-07-06T23:11:15.152700400Z" level=info msg="StartContainer for \"af57b37c0fbb19b4e8da94313fc57d386ce76f9df3e52d6f92e27325a9aab5cb\""
Jul 6 23:11:15.251442 systemd[1]: Started cri-containerd-af57b37c0fbb19b4e8da94313fc57d386ce76f9df3e52d6f92e27325a9aab5cb.scope - libcontainer container af57b37c0fbb19b4e8da94313fc57d386ce76f9df3e52d6f92e27325a9aab5cb.
Jul 6 23:11:15.310102 containerd[1983]: time="2025-07-06T23:11:15.310033421Z" level=info msg="StartContainer for \"af57b37c0fbb19b4e8da94313fc57d386ce76f9df3e52d6f92e27325a9aab5cb\" returns successfully"
Jul 6 23:11:15.326657 systemd[1]: cri-containerd-af57b37c0fbb19b4e8da94313fc57d386ce76f9df3e52d6f92e27325a9aab5cb.scope: Deactivated successfully.
Jul 6 23:11:15.376625 containerd[1983]: time="2025-07-06T23:11:15.376073237Z" level=info msg="shim disconnected" id=af57b37c0fbb19b4e8da94313fc57d386ce76f9df3e52d6f92e27325a9aab5cb namespace=k8s.io
Jul 6 23:11:15.376625 containerd[1983]: time="2025-07-06T23:11:15.376182581Z" level=warning msg="cleaning up after shim disconnected" id=af57b37c0fbb19b4e8da94313fc57d386ce76f9df3e52d6f92e27325a9aab5cb namespace=k8s.io
Jul 6 23:11:15.376625 containerd[1983]: time="2025-07-06T23:11:15.376202033Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:11:16.100992 containerd[1983]: time="2025-07-06T23:11:16.100853945Z" level=info msg="CreateContainer within sandbox \"e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:11:16.150101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170946446.mount: Deactivated successfully.
Jul 6 23:11:16.153419 containerd[1983]: time="2025-07-06T23:11:16.153211385Z" level=info msg="CreateContainer within sandbox \"e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ba7e0bb252537026d31ac199334d59de012552046eb085a7116bf66f6d043d46\""
Jul 6 23:11:16.155401 containerd[1983]: time="2025-07-06T23:11:16.154729301Z" level=info msg="StartContainer for \"ba7e0bb252537026d31ac199334d59de012552046eb085a7116bf66f6d043d46\""
Jul 6 23:11:16.155577 kubelet[3235]: I0706 23:11:16.155326 3235 setters.go:602] "Node became not ready" node="ip-172-31-22-108" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:11:16Z","lastTransitionTime":"2025-07-06T23:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 6 23:11:16.238794 systemd[1]: Started cri-containerd-ba7e0bb252537026d31ac199334d59de012552046eb085a7116bf66f6d043d46.scope - libcontainer container ba7e0bb252537026d31ac199334d59de012552046eb085a7116bf66f6d043d46.
Jul 6 23:11:16.295027 containerd[1983]: time="2025-07-06T23:11:16.294873774Z" level=info msg="StartContainer for \"ba7e0bb252537026d31ac199334d59de012552046eb085a7116bf66f6d043d46\" returns successfully"
Jul 6 23:11:16.299212 systemd[1]: cri-containerd-ba7e0bb252537026d31ac199334d59de012552046eb085a7116bf66f6d043d46.scope: Deactivated successfully.
Jul 6 23:11:16.351686 containerd[1983]: time="2025-07-06T23:11:16.351085950Z" level=info msg="shim disconnected" id=ba7e0bb252537026d31ac199334d59de012552046eb085a7116bf66f6d043d46 namespace=k8s.io
Jul 6 23:11:16.351686 containerd[1983]: time="2025-07-06T23:11:16.351159642Z" level=warning msg="cleaning up after shim disconnected" id=ba7e0bb252537026d31ac199334d59de012552046eb085a7116bf66f6d043d46 namespace=k8s.io
Jul 6 23:11:16.351686 containerd[1983]: time="2025-07-06T23:11:16.351178722Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:11:16.595378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba7e0bb252537026d31ac199334d59de012552046eb085a7116bf66f6d043d46-rootfs.mount: Deactivated successfully.
Jul 6 23:11:17.105381 containerd[1983]: time="2025-07-06T23:11:17.105311970Z" level=info msg="CreateContainer within sandbox \"e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:11:17.138824 containerd[1983]: time="2025-07-06T23:11:17.137794326Z" level=info msg="CreateContainer within sandbox \"e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a805291ca969157aeb2b8a0cbc1708dd5f6ae77dc53cf643d4fc6e750c35da53\""
Jul 6 23:11:17.138824 containerd[1983]: time="2025-07-06T23:11:17.138744330Z" level=info msg="StartContainer for \"a805291ca969157aeb2b8a0cbc1708dd5f6ae77dc53cf643d4fc6e750c35da53\""
Jul 6 23:11:17.204832 systemd[1]: Started cri-containerd-a805291ca969157aeb2b8a0cbc1708dd5f6ae77dc53cf643d4fc6e750c35da53.scope - libcontainer container a805291ca969157aeb2b8a0cbc1708dd5f6ae77dc53cf643d4fc6e750c35da53.
Jul 6 23:11:17.333449 containerd[1983]: time="2025-07-06T23:11:17.333378751Z" level=info msg="StartContainer for \"a805291ca969157aeb2b8a0cbc1708dd5f6ae77dc53cf643d4fc6e750c35da53\" returns successfully"
Jul 6 23:11:17.340872 systemd[1]: cri-containerd-a805291ca969157aeb2b8a0cbc1708dd5f6ae77dc53cf643d4fc6e750c35da53.scope: Deactivated successfully.
Jul 6 23:11:17.419058 containerd[1983]: time="2025-07-06T23:11:17.418327987Z" level=info msg="shim disconnected" id=a805291ca969157aeb2b8a0cbc1708dd5f6ae77dc53cf643d4fc6e750c35da53 namespace=k8s.io
Jul 6 23:11:17.419058 containerd[1983]: time="2025-07-06T23:11:17.418406683Z" level=warning msg="cleaning up after shim disconnected" id=a805291ca969157aeb2b8a0cbc1708dd5f6ae77dc53cf643d4fc6e750c35da53 namespace=k8s.io
Jul 6 23:11:17.419058 containerd[1983]: time="2025-07-06T23:11:17.418428775Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:11:17.595311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a805291ca969157aeb2b8a0cbc1708dd5f6ae77dc53cf643d4fc6e750c35da53-rootfs.mount: Deactivated successfully.
Jul 6 23:11:18.117036 containerd[1983]: time="2025-07-06T23:11:18.116957035Z" level=info msg="CreateContainer within sandbox \"e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:11:18.151521 containerd[1983]: time="2025-07-06T23:11:18.151296223Z" level=info msg="CreateContainer within sandbox \"e4b96b0915acb1ea9307edfdb756302b69c343d38c07f0b842161078e4c00c36\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eda48a33b16307b21f87fe47ef1015dd69c92316cf8c4ec7573d3c633bc96c59\""
Jul 6 23:11:18.152329 containerd[1983]: time="2025-07-06T23:11:18.152048899Z" level=info msg="StartContainer for \"eda48a33b16307b21f87fe47ef1015dd69c92316cf8c4ec7573d3c633bc96c59\""
Jul 6 23:11:18.242147 systemd[1]: Started cri-containerd-eda48a33b16307b21f87fe47ef1015dd69c92316cf8c4ec7573d3c633bc96c59.scope - libcontainer container eda48a33b16307b21f87fe47ef1015dd69c92316cf8c4ec7573d3c633bc96c59.
Jul 6 23:11:18.321374 containerd[1983]: time="2025-07-06T23:11:18.321008396Z" level=info msg="StartContainer for \"eda48a33b16307b21f87fe47ef1015dd69c92316cf8c4ec7573d3c633bc96c59\" returns successfully"
Jul 6 23:11:19.168566 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 6 23:11:21.869899 kubelet[3235]: E0706 23:11:21.869690 3235 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43502->127.0.0.1:45429: write tcp 127.0.0.1:43502->127.0.0.1:45429: write: broken pipe
Jul 6 23:11:23.484829 (udev-worker)[6174]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:11:23.486436 (udev-worker)[6175]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:11:23.498447 containerd[1983]: time="2025-07-06T23:11:23.496897657Z" level=info msg="StopPodSandbox for \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\""
Jul 6 23:11:23.498447 containerd[1983]: time="2025-07-06T23:11:23.497033725Z" level=info msg="TearDown network for sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" successfully"
Jul 6 23:11:23.498447 containerd[1983]: time="2025-07-06T23:11:23.497055541Z" level=info msg="StopPodSandbox for \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" returns successfully"
Jul 6 23:11:23.500536 containerd[1983]: time="2025-07-06T23:11:23.498346345Z" level=info msg="RemovePodSandbox for \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\""
Jul 6 23:11:23.500536 containerd[1983]: time="2025-07-06T23:11:23.499190593Z" level=info msg="Forcibly stopping sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\""
Jul 6 23:11:23.500536 containerd[1983]: time="2025-07-06T23:11:23.499370149Z" level=info msg="TearDown network for sandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" successfully"
Jul 6 23:11:23.522393 containerd[1983]: time="2025-07-06T23:11:23.519365761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:11:23.522393 containerd[1983]: time="2025-07-06T23:11:23.519959605Z" level=info msg="RemovePodSandbox \"d02107e3cc6ef522a23f4d073492aa2c37d59b2365eee8db60904083a13e0613\" returns successfully"
Jul 6 23:11:23.522393 containerd[1983]: time="2025-07-06T23:11:23.521853061Z" level=info msg="StopPodSandbox for \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\""
Jul 6 23:11:23.522393 containerd[1983]: time="2025-07-06T23:11:23.521990533Z" level=info msg="TearDown network for sandbox \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\" successfully"
Jul 6 23:11:23.522393 containerd[1983]: time="2025-07-06T23:11:23.522011713Z" level=info msg="StopPodSandbox for \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\" returns successfully"
Jul 6 23:11:23.524385 containerd[1983]: time="2025-07-06T23:11:23.523092085Z" level=info msg="RemovePodSandbox for \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\""
Jul 6 23:11:23.524385 containerd[1983]: time="2025-07-06T23:11:23.523151893Z" level=info msg="Forcibly stopping sandbox \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\""
Jul 6 23:11:23.524385 containerd[1983]: time="2025-07-06T23:11:23.523249249Z" level=info msg="TearDown network for sandbox \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\" successfully"
Jul 6 23:11:23.524936 systemd-networkd[1879]: lxc_health: Link UP
Jul 6 23:11:23.543900 containerd[1983]: time="2025-07-06T23:11:23.540005102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:11:23.543900 containerd[1983]: time="2025-07-06T23:11:23.540101666Z" level=info msg="RemovePodSandbox \"59faf9dc7c65e9efba02cc5f637fce7deb49eb00630c98bac7e79d0d2aeeb6ba\" returns successfully"
Jul 6 23:11:23.554871 systemd-networkd[1879]: lxc_health: Gained carrier
Jul 6 23:11:24.675870 systemd-networkd[1879]: lxc_health: Gained IPv6LL
Jul 6 23:11:24.721247 kubelet[3235]: I0706 23:11:24.721148 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zrmzm" podStartSLOduration=10.721125267 podStartE2EDuration="10.721125267s" podCreationTimestamp="2025-07-06 23:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:11:19.166615088 +0000 UTC m=+115.936939074" watchObservedRunningTime="2025-07-06 23:11:24.721125267 +0000 UTC m=+121.491449217"
Jul 6 23:11:26.757407 ntpd[1942]: Listen normally on 14 lxc_health [fe80::84f:b2ff:fee8:ec53%14]:123
Jul 6 23:11:26.757954 ntpd[1942]: 6 Jul 23:11:26 ntpd[1942]: Listen normally on 14 lxc_health [fe80::84f:b2ff:fee8:ec53%14]:123
Jul 6 23:11:31.122054 sshd[5421]: Connection closed by 147.75.109.163 port 33694
Jul 6 23:11:31.124822 sshd-session[5357]: pam_unix(sshd:session): session closed for user core
Jul 6 23:11:31.131570 systemd-logind[1948]: Session 29 logged out. Waiting for processes to exit.
Jul 6 23:11:31.133280 systemd[1]: sshd@28-172.31.22.108:22-147.75.109.163:33694.service: Deactivated successfully.
Jul 6 23:11:31.141020 systemd[1]: session-29.scope: Deactivated successfully.
Jul 6 23:11:31.146403 systemd-logind[1948]: Removed session 29.
Jul 6 23:11:45.528434 systemd[1]: cri-containerd-4b5edf8d8f88fb15631f41011d31ac47777b99d88e9fa5e8f606a25c7fcb3731.scope: Deactivated successfully.
Jul 6 23:11:45.529281 systemd[1]: cri-containerd-4b5edf8d8f88fb15631f41011d31ac47777b99d88e9fa5e8f606a25c7fcb3731.scope: Consumed 5.389s CPU time, 57.7M memory peak.
Jul 6 23:11:45.578005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b5edf8d8f88fb15631f41011d31ac47777b99d88e9fa5e8f606a25c7fcb3731-rootfs.mount: Deactivated successfully.
Jul 6 23:11:45.601258 containerd[1983]: time="2025-07-06T23:11:45.600981299Z" level=info msg="shim disconnected" id=4b5edf8d8f88fb15631f41011d31ac47777b99d88e9fa5e8f606a25c7fcb3731 namespace=k8s.io
Jul 6 23:11:45.601258 containerd[1983]: time="2025-07-06T23:11:45.601070687Z" level=warning msg="cleaning up after shim disconnected" id=4b5edf8d8f88fb15631f41011d31ac47777b99d88e9fa5e8f606a25c7fcb3731 namespace=k8s.io
Jul 6 23:11:45.601258 containerd[1983]: time="2025-07-06T23:11:45.601090163Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:11:45.893932 kubelet[3235]: E0706 23:11:45.893693 3235 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-108?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 6 23:11:46.198330 kubelet[3235]: I0706 23:11:46.198281 3235 scope.go:117] "RemoveContainer" containerID="4b5edf8d8f88fb15631f41011d31ac47777b99d88e9fa5e8f606a25c7fcb3731"
Jul 6 23:11:46.202567 containerd[1983]: time="2025-07-06T23:11:46.202498834Z" level=info msg="CreateContainer within sandbox \"b1eb1430d052756944c41c3551b482ec69371e485074ce55500d6f45a1ff68fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 6 23:11:46.231730 containerd[1983]: time="2025-07-06T23:11:46.231640870Z" level=info msg="CreateContainer within sandbox \"b1eb1430d052756944c41c3551b482ec69371e485074ce55500d6f45a1ff68fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6fc4616fb74e987e88a3e6892c70b6585ffcec0998d4cc755167ef8991a0112a\""
Jul 6 23:11:46.232370 containerd[1983]: time="2025-07-06T23:11:46.232325674Z" level=info msg="StartContainer for \"6fc4616fb74e987e88a3e6892c70b6585ffcec0998d4cc755167ef8991a0112a\""
Jul 6 23:11:46.287814 systemd[1]: Started cri-containerd-6fc4616fb74e987e88a3e6892c70b6585ffcec0998d4cc755167ef8991a0112a.scope - libcontainer container 6fc4616fb74e987e88a3e6892c70b6585ffcec0998d4cc755167ef8991a0112a.
Jul 6 23:11:46.363418 containerd[1983]: time="2025-07-06T23:11:46.363343655Z" level=info msg="StartContainer for \"6fc4616fb74e987e88a3e6892c70b6585ffcec0998d4cc755167ef8991a0112a\" returns successfully"
Jul 6 23:11:49.488049 systemd[1]: cri-containerd-cf34d138447a8e42682709670186e4825a977d2d03ab10ef079cfc97dd001f14.scope: Deactivated successfully.
Jul 6 23:11:49.489195 systemd[1]: cri-containerd-cf34d138447a8e42682709670186e4825a977d2d03ab10ef079cfc97dd001f14.scope: Consumed 5.445s CPU time, 23.1M memory peak.
Jul 6 23:11:49.532247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf34d138447a8e42682709670186e4825a977d2d03ab10ef079cfc97dd001f14-rootfs.mount: Deactivated successfully.
Jul 6 23:11:49.548637 containerd[1983]: time="2025-07-06T23:11:49.548552703Z" level=info msg="shim disconnected" id=cf34d138447a8e42682709670186e4825a977d2d03ab10ef079cfc97dd001f14 namespace=k8s.io
Jul 6 23:11:49.548637 containerd[1983]: time="2025-07-06T23:11:49.548632083Z" level=warning msg="cleaning up after shim disconnected" id=cf34d138447a8e42682709670186e4825a977d2d03ab10ef079cfc97dd001f14 namespace=k8s.io
Jul 6 23:11:49.549831 containerd[1983]: time="2025-07-06T23:11:49.548653071Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:11:50.215923 kubelet[3235]: I0706 23:11:50.215805 3235 scope.go:117] "RemoveContainer" containerID="cf34d138447a8e42682709670186e4825a977d2d03ab10ef079cfc97dd001f14"
Jul 6 23:11:50.219372 containerd[1983]: time="2025-07-06T23:11:50.219299018Z" level=info msg="CreateContainer within sandbox \"83c5b7bf614d5dc8ff6bc0c8fe22f3528cdf0736f4bb0f5f9d2d260db6a7b6ff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 6 23:11:50.249229 containerd[1983]: time="2025-07-06T23:11:50.249160286Z" level=info msg="CreateContainer within sandbox \"83c5b7bf614d5dc8ff6bc0c8fe22f3528cdf0736f4bb0f5f9d2d260db6a7b6ff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c1fb65823dcd5ef04dbafde55dc2d2e47538649322eba9580e7bbc8ecf7c9c4d\""
Jul 6 23:11:50.250691 containerd[1983]: time="2025-07-06T23:11:50.250119506Z" level=info msg="StartContainer for \"c1fb65823dcd5ef04dbafde55dc2d2e47538649322eba9580e7bbc8ecf7c9c4d\""
Jul 6 23:11:50.311829 systemd[1]: Started cri-containerd-c1fb65823dcd5ef04dbafde55dc2d2e47538649322eba9580e7bbc8ecf7c9c4d.scope - libcontainer container c1fb65823dcd5ef04dbafde55dc2d2e47538649322eba9580e7bbc8ecf7c9c4d.
Jul 6 23:11:50.385631 containerd[1983]: time="2025-07-06T23:11:50.385283139Z" level=info msg="StartContainer for \"c1fb65823dcd5ef04dbafde55dc2d2e47538649322eba9580e7bbc8ecf7c9c4d\" returns successfully"
Jul 6 23:11:55.894793 kubelet[3235]: E0706 23:11:55.894459 3235 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-108?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 6 23:12:05.896278 kubelet[3235]: E0706 23:12:05.895536 3235 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-108?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"