Aug 13 00:17:57.248939 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Aug 13 00:17:57.248983 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025 Aug 13 00:17:57.249008 kernel: KASLR disabled due to lack of seed Aug 13 00:17:57.249025 kernel: efi: EFI v2.7 by EDK II Aug 13 00:17:57.249041 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18 Aug 13 00:17:57.249057 kernel: ACPI: Early table checksum verification disabled Aug 13 00:17:57.249075 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Aug 13 00:17:57.249090 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Aug 13 00:17:57.249106 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Aug 13 00:17:57.249122 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Aug 13 00:17:57.249143 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Aug 13 00:17:57.249160 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Aug 13 00:17:57.249175 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Aug 13 00:17:57.249191 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Aug 13 00:17:57.249210 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Aug 13 00:17:57.249230 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Aug 13 00:17:57.249248 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Aug 13 00:17:57.249265 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Aug 13 00:17:57.249281 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Aug 13 00:17:57.249298 kernel: printk: bootconsole [uart0] enabled Aug 13 00:17:57.249314 kernel: NUMA: Failed to initialise from firmware Aug 13 00:17:57.249331 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Aug 13 00:17:57.249421 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Aug 13 00:17:57.249444 kernel: Zone ranges: Aug 13 00:17:57.249462 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Aug 13 00:17:57.249479 kernel: DMA32 empty Aug 13 00:17:57.249502 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Aug 13 00:17:57.249520 kernel: Movable zone start for each node Aug 13 00:17:57.249536 kernel: Early memory node ranges Aug 13 00:17:57.249553 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Aug 13 00:17:57.249569 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Aug 13 00:17:57.249586 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Aug 13 00:17:57.249602 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Aug 13 00:17:57.249680 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Aug 13 00:17:57.249735 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Aug 13 00:17:57.250118 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Aug 13 00:17:57.251220 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Aug 13 00:17:57.251289 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Aug 13 00:17:57.251383 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Aug 13 00:17:57.251402 kernel: psci: probing for conduit method from ACPI. Aug 13 00:17:57.251426 kernel: psci: PSCIv1.0 detected in firmware. Aug 13 00:17:57.251444 kernel: psci: Using standard PSCI v0.2 function IDs Aug 13 00:17:57.251462 kernel: psci: Trusted OS migration not required Aug 13 00:17:57.251484 kernel: psci: SMC Calling Convention v1.1 Aug 13 00:17:57.251502 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Aug 13 00:17:57.251520 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Aug 13 00:17:57.251537 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Aug 13 00:17:57.251555 kernel: pcpu-alloc: [0] 0 [0] 1 Aug 13 00:17:57.251572 kernel: Detected PIPT I-cache on CPU0 Aug 13 00:17:57.251590 kernel: CPU features: detected: GIC system register CPU interface Aug 13 00:17:57.251607 kernel: CPU features: detected: Spectre-v2 Aug 13 00:17:57.251625 kernel: CPU features: detected: Spectre-v3a Aug 13 00:17:57.251642 kernel: CPU features: detected: Spectre-BHB Aug 13 00:17:57.251659 kernel: CPU features: detected: ARM erratum 1742098 Aug 13 00:17:57.251681 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Aug 13 00:17:57.251699 kernel: alternatives: applying boot alternatives Aug 13 00:17:57.251719 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 13 00:17:57.251738 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:17:57.251756 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 00:17:57.251774 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:17:57.251791 kernel: Fallback order for Node 0: 0 Aug 13 00:17:57.251809 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Aug 13 00:17:57.251826 kernel: Policy zone: Normal Aug 13 00:17:57.251843 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:17:57.251861 kernel: software IO TLB: area num 2. Aug 13 00:17:57.251883 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Aug 13 00:17:57.251902 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved) Aug 13 00:17:57.251919 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:17:57.251937 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 00:17:57.251955 kernel: rcu: RCU event tracing is enabled. Aug 13 00:17:57.251974 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:17:57.251991 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 00:17:57.252009 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:17:57.252026 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
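Note: the full kernel command line logged above (BOOT_IMAGE=/flatcar/vmlinuz-a ... verity.usrhash=...) can be read back from a shell on the booted system. A minimal check, assuming a standard /proc mount:

  cat /proc/cmdline                 # the exact command line the kernel booted with
  tr ' ' '\n' < /proc/cmdline       # same content, one parameter per line for easier scanning
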
Aug 13 00:17:57.252044 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:17:57.252062 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 13 00:17:57.252083 kernel: GICv3: 96 SPIs implemented Aug 13 00:17:57.252101 kernel: GICv3: 0 Extended SPIs implemented Aug 13 00:17:57.252119 kernel: Root IRQ handler: gic_handle_irq Aug 13 00:17:57.252136 kernel: GICv3: GICv3 features: 16 PPIs Aug 13 00:17:57.252153 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Aug 13 00:17:57.252171 kernel: ITS [mem 0x10080000-0x1009ffff] Aug 13 00:17:57.252188 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Aug 13 00:17:57.252207 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Aug 13 00:17:57.252224 kernel: GICv3: using LPI property table @0x00000004000d0000 Aug 13 00:17:57.252242 kernel: ITS: Using hypervisor restricted LPI range [128] Aug 13 00:17:57.252259 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Aug 13 00:17:57.252277 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 00:17:57.252299 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Aug 13 00:17:57.252317 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Aug 13 00:17:57.252335 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Aug 13 00:17:57.252370 kernel: Console: colour dummy device 80x25 Aug 13 00:17:57.252391 kernel: printk: console [tty1] enabled Aug 13 00:17:57.252409 kernel: ACPI: Core revision 20230628 Aug 13 00:17:57.252428 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Aug 13 00:17:57.252447 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:17:57.252465 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 00:17:57.252489 kernel: landlock: Up and running. Aug 13 00:17:57.252507 kernel: SELinux: Initializing. Aug 13 00:17:57.252525 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:17:57.252543 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:17:57.252561 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:17:57.252579 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:17:57.252597 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:17:57.252615 kernel: rcu: Max phase no-delay instances is 400. Aug 13 00:17:57.252633 kernel: Platform MSI: ITS@0x10080000 domain created Aug 13 00:17:57.252655 kernel: PCI/MSI: ITS@0x10080000 domain created Aug 13 00:17:57.252673 kernel: Remapping and enabling EFI services. Aug 13 00:17:57.252691 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:17:57.252709 kernel: Detected PIPT I-cache on CPU1 Aug 13 00:17:57.252727 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Aug 13 00:17:57.252746 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Aug 13 00:17:57.252764 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Aug 13 00:17:57.252781 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:17:57.252799 kernel: SMP: Total of 2 processors activated. 
Aug 13 00:17:57.252817 kernel: CPU features: detected: 32-bit EL0 Support Aug 13 00:17:57.252839 kernel: CPU features: detected: 32-bit EL1 Support Aug 13 00:17:57.252857 kernel: CPU features: detected: CRC32 instructions Aug 13 00:17:57.252886 kernel: CPU: All CPU(s) started at EL1 Aug 13 00:17:57.252909 kernel: alternatives: applying system-wide alternatives Aug 13 00:17:57.252928 kernel: devtmpfs: initialized Aug 13 00:17:57.252946 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:17:57.252965 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:17:57.252984 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:17:57.253003 kernel: SMBIOS 3.0.0 present. Aug 13 00:17:57.253026 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Aug 13 00:17:57.253045 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:17:57.253064 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 13 00:17:57.253083 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 13 00:17:57.253102 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 13 00:17:57.253120 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:17:57.253139 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1 Aug 13 00:17:57.253162 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:17:57.253181 kernel: cpuidle: using governor menu Aug 13 00:17:57.253199 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Aug 13 00:17:57.253218 kernel: ASID allocator initialised with 65536 entries Aug 13 00:17:57.253236 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:17:57.253255 kernel: Serial: AMBA PL011 UART driver Aug 13 00:17:57.253273 kernel: Modules: 17488 pages in range for non-PLT usage Aug 13 00:17:57.253292 kernel: Modules: 509008 pages in range for PLT usage Aug 13 00:17:57.253311 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:17:57.253334 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 00:17:57.255148 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 13 00:17:57.255174 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 13 00:17:57.255193 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:17:57.255212 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 00:17:57.255230 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Aug 13 00:17:57.255249 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 13 00:17:57.255268 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:17:57.255286 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:17:57.255314 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:17:57.255333 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:17:57.255385 kernel: ACPI: Interpreter enabled Aug 13 00:17:57.255406 kernel: ACPI: Using GIC for interrupt routing Aug 13 00:17:57.255425 kernel: ACPI: MCFG table detected, 1 entries Aug 13 00:17:57.255444 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Aug 13 00:17:57.255768 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:17:57.256017 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Aug 13 00:17:57.256230 
kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 13 00:17:57.256492 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Aug 13 00:17:57.256704 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Aug 13 00:17:57.256753 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Aug 13 00:17:57.256775 kernel: acpiphp: Slot [1] registered Aug 13 00:17:57.256794 kernel: acpiphp: Slot [2] registered Aug 13 00:17:57.256812 kernel: acpiphp: Slot [3] registered Aug 13 00:17:57.256831 kernel: acpiphp: Slot [4] registered Aug 13 00:17:57.256856 kernel: acpiphp: Slot [5] registered Aug 13 00:17:57.256875 kernel: acpiphp: Slot [6] registered Aug 13 00:17:57.256894 kernel: acpiphp: Slot [7] registered Aug 13 00:17:57.257932 kernel: acpiphp: Slot [8] registered Aug 13 00:17:57.257969 kernel: acpiphp: Slot [9] registered Aug 13 00:17:57.257988 kernel: acpiphp: Slot [10] registered Aug 13 00:17:57.258009 kernel: acpiphp: Slot [11] registered Aug 13 00:17:57.258028 kernel: acpiphp: Slot [12] registered Aug 13 00:17:57.258048 kernel: acpiphp: Slot [13] registered Aug 13 00:17:57.258067 kernel: acpiphp: Slot [14] registered Aug 13 00:17:57.258097 kernel: acpiphp: Slot [15] registered Aug 13 00:17:57.258117 kernel: acpiphp: Slot [16] registered Aug 13 00:17:57.258135 kernel: acpiphp: Slot [17] registered Aug 13 00:17:57.258154 kernel: acpiphp: Slot [18] registered Aug 13 00:17:57.258172 kernel: acpiphp: Slot [19] registered Aug 13 00:17:57.258191 kernel: acpiphp: Slot [20] registered Aug 13 00:17:57.258210 kernel: acpiphp: Slot [21] registered Aug 13 00:17:57.258228 kernel: acpiphp: Slot [22] registered Aug 13 00:17:57.258247 kernel: acpiphp: Slot [23] registered Aug 13 00:17:57.258270 kernel: acpiphp: Slot [24] registered Aug 13 00:17:57.258290 kernel: acpiphp: Slot [25] registered Aug 13 00:17:57.258308 kernel: acpiphp: Slot [26] registered Aug 13 00:17:57.258327 kernel: acpiphp: Slot [27] registered Aug 13 00:17:57.258367 kernel: acpiphp: Slot [28] registered Aug 13 00:17:57.258392 kernel: acpiphp: Slot [29] registered Aug 13 00:17:57.258412 kernel: acpiphp: Slot [30] registered Aug 13 00:17:57.258432 kernel: acpiphp: Slot [31] registered Aug 13 00:17:57.258451 kernel: PCI host bridge to bus 0000:00 Aug 13 00:17:57.258718 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Aug 13 00:17:57.258926 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Aug 13 00:17:57.259122 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Aug 13 00:17:57.259316 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Aug 13 00:17:57.259952 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Aug 13 00:17:57.260192 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Aug 13 00:17:57.260454 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Aug 13 00:17:57.260710 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Aug 13 00:17:57.260947 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Aug 13 00:17:57.261184 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 00:17:57.261512 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Aug 13 00:17:57.261780 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Aug 13 00:17:57.261996 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Aug 13 00:17:57.262217 kernel: pci 0000:00:05.0: reg 0x20: 
[mem 0x80100000-0x8010ffff] Aug 13 00:17:57.263597 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 00:17:57.263895 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Aug 13 00:17:57.264103 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Aug 13 00:17:57.264326 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Aug 13 00:17:57.265836 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Aug 13 00:17:57.266123 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Aug 13 00:17:57.267673 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Aug 13 00:17:57.267908 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Aug 13 00:17:57.268110 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Aug 13 00:17:57.268136 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Aug 13 00:17:57.268157 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Aug 13 00:17:57.268176 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Aug 13 00:17:57.268196 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Aug 13 00:17:57.268215 kernel: iommu: Default domain type: Translated Aug 13 00:17:57.268233 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 13 00:17:57.268258 kernel: efivars: Registered efivars operations Aug 13 00:17:57.268277 kernel: vgaarb: loaded Aug 13 00:17:57.268296 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 13 00:17:57.268315 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:17:57.268334 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:17:57.268378 kernel: pnp: PnP ACPI init Aug 13 00:17:57.268616 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Aug 13 00:17:57.268644 kernel: pnp: PnP ACPI: found 1 devices Aug 13 00:17:57.268669 kernel: NET: Registered PF_INET protocol family Aug 13 00:17:57.268689 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:17:57.268708 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 00:17:57.268727 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:17:57.268746 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:17:57.268765 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 00:17:57.268784 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 00:17:57.268803 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:17:57.268821 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:17:57.268845 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:17:57.268864 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:17:57.268882 kernel: kvm [1]: HYP mode not available Aug 13 00:17:57.268901 kernel: Initialise system trusted keyrings Aug 13 00:17:57.268920 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 00:17:57.268939 kernel: Key type asymmetric registered Aug 13 00:17:57.268957 kernel: Asymmetric key parser 'x509' registered Aug 13 00:17:57.268976 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 00:17:57.268995 kernel: io scheduler mq-deadline registered Aug 13 00:17:57.269018 kernel: io scheduler kyber registered Aug 13 00:17:57.269037 kernel: io 
scheduler bfq registered Aug 13 00:17:57.269275 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Aug 13 00:17:57.269304 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Aug 13 00:17:57.269323 kernel: ACPI: button: Power Button [PWRB] Aug 13 00:17:57.269343 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Aug 13 00:17:57.276608 kernel: ACPI: button: Sleep Button [SLPB] Aug 13 00:17:57.276631 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:17:57.276663 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Aug 13 00:17:57.276955 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Aug 13 00:17:57.276983 kernel: printk: console [ttyS0] disabled Aug 13 00:17:57.277003 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Aug 13 00:17:57.277023 kernel: printk: console [ttyS0] enabled Aug 13 00:17:57.277042 kernel: printk: bootconsole [uart0] disabled Aug 13 00:17:57.277061 kernel: thunder_xcv, ver 1.0 Aug 13 00:17:57.277079 kernel: thunder_bgx, ver 1.0 Aug 13 00:17:57.277098 kernel: nicpf, ver 1.0 Aug 13 00:17:57.277126 kernel: nicvf, ver 1.0 Aug 13 00:17:57.278444 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 13 00:17:57.278783 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:17:56 UTC (1755044276) Aug 13 00:17:57.278822 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 00:17:57.278846 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Aug 13 00:17:57.278866 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 13 00:17:57.278896 kernel: watchdog: Hard watchdog permanently disabled Aug 13 00:17:57.278918 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:17:57.278958 kernel: Segment Routing with IPv6 Aug 13 00:17:57.278988 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:17:57.279012 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:17:57.279042 kernel: Key type dns_resolver registered Aug 13 00:17:57.279064 kernel: registered taskstats version 1 Aug 13 00:17:57.279095 kernel: Loading compiled-in X.509 certificates Aug 13 00:17:57.279117 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6' Aug 13 00:17:57.279149 kernel: Key type .fscrypt registered Aug 13 00:17:57.279178 kernel: Key type fscrypt-provisioning registered Aug 13 00:17:57.279209 kernel: ima: No TPM chip found, activating TPM-bypass! 
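Note: the console handover earlier in this chunk (bootconsole [uart0] disabled, console [ttyS0] enabled alongside tty1) follows from the console=tty1 console=ttyS0,115200n8 parameters. The consoles currently registered with the kernel can be listed with, for example:

  cat /proc/consoles                # active console devices and their flags
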
Aug 13 00:17:57.279243 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:17:57.279263 kernel: ima: No architecture policies found Aug 13 00:17:57.279297 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 13 00:17:57.279327 kernel: clk: Disabling unused clocks Aug 13 00:17:57.279929 kernel: Freeing unused kernel memory: 39424K Aug 13 00:17:57.279961 kernel: Run /init as init process Aug 13 00:17:57.279994 kernel: with arguments: Aug 13 00:17:57.280016 kernel: /init Aug 13 00:17:57.280049 kernel: with environment: Aug 13 00:17:57.280089 kernel: HOME=/ Aug 13 00:17:57.280110 kernel: TERM=linux Aug 13 00:17:57.280141 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:17:57.280166 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:17:57.280203 systemd[1]: Detected virtualization amazon. Aug 13 00:17:57.280235 systemd[1]: Detected architecture arm64. Aug 13 00:17:57.280258 systemd[1]: Running in initrd. Aug 13 00:17:57.280296 systemd[1]: No hostname configured, using default hostname. Aug 13 00:17:57.280328 systemd[1]: Hostname set to . Aug 13 00:17:57.282384 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:17:57.282428 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:17:57.282454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:17:57.282491 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:17:57.282516 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 00:17:57.282537 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:17:57.282579 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:17:57.282616 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:17:57.282652 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:17:57.282679 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:17:57.282713 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:17:57.282738 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:17:57.282772 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:17:57.282810 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:17:57.282835 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:17:57.282869 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:17:57.282900 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:17:57.282928 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:17:57.282961 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:17:57.282986 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 00:17:57.283021 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
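Note: from the journald start just below, the remaining messages are systemd journal output. A transcript in this timestamp format can typically be reproduced on the running host with journalctl; a sketch, assuming the runtime or persistent journal for this boot is still available:

  journalctl -b -o short-precise    # current boot, microsecond timestamps as above
  journalctl -k -b                  # kernel messages only, similar to dmesg
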
Aug 13 00:17:57.283053 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:17:57.283085 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:17:57.283120 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:17:57.283152 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:17:57.283178 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:17:57.283218 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:17:57.283251 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:17:57.283277 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:17:57.283312 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:17:57.283383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:17:57.283409 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:17:57.283504 systemd-journald[250]: Collecting audit messages is disabled. Aug 13 00:17:57.283564 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:17:57.283604 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:17:57.283640 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:17:57.283662 systemd-journald[250]: Journal started Aug 13 00:17:57.283716 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2f27eab272b09f8884ae8738c1933c) is 8.0M, max 75.3M, 67.3M free. Aug 13 00:17:57.270096 systemd-modules-load[251]: Inserted module 'overlay' Aug 13 00:17:57.298420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:17:57.309403 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:17:57.310841 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:17:57.317301 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:17:57.317338 kernel: Bridge firewalling registered Aug 13 00:17:57.318420 systemd-modules-load[251]: Inserted module 'br_netfilter' Aug 13 00:17:57.325759 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:17:57.336829 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:17:57.344152 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:17:57.355612 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:17:57.375689 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:17:57.388876 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:17:57.401571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:17:57.426324 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:17:57.429310 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:17:57.439811 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:17:57.462372 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Aug 13 00:17:57.466149 dracut-cmdline[286]: dracut-dracut-053 Aug 13 00:17:57.470343 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 13 00:17:57.563517 systemd-resolved[291]: Positive Trust Anchors: Aug 13 00:17:57.563561 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:17:57.563662 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:17:57.615075 kernel: SCSI subsystem initialized Aug 13 00:17:57.622467 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:17:57.635473 kernel: iscsi: registered transport (tcp) Aug 13 00:17:57.658171 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:17:57.658289 kernel: QLogic iSCSI HBA Driver Aug 13 00:17:57.790387 kernel: random: crng init done Aug 13 00:17:57.789663 systemd-resolved[291]: Defaulting to hostname 'linux'. Aug 13 00:17:57.799692 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:17:57.805291 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:17:57.821409 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:17:57.832681 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 00:17:57.871180 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:17:57.871254 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:17:57.873367 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 00:17:57.939414 kernel: raid6: neonx8 gen() 6707 MB/s Aug 13 00:17:57.956385 kernel: raid6: neonx4 gen() 6514 MB/s Aug 13 00:17:57.973382 kernel: raid6: neonx2 gen() 5440 MB/s Aug 13 00:17:57.990383 kernel: raid6: neonx1 gen() 3949 MB/s Aug 13 00:17:58.007381 kernel: raid6: int64x8 gen() 3791 MB/s Aug 13 00:17:58.024381 kernel: raid6: int64x4 gen() 3713 MB/s Aug 13 00:17:58.041382 kernel: raid6: int64x2 gen() 3581 MB/s Aug 13 00:17:58.059376 kernel: raid6: int64x1 gen() 2758 MB/s Aug 13 00:17:58.059417 kernel: raid6: using algorithm neonx8 gen() 6707 MB/s Aug 13 00:17:58.078353 kernel: raid6: .... 
xor() 4870 MB/s, rmw enabled Aug 13 00:17:58.078398 kernel: raid6: using neon recovery algorithm Aug 13 00:17:58.086389 kernel: xor: measuring software checksum speed Aug 13 00:17:58.088586 kernel: 8regs : 10249 MB/sec Aug 13 00:17:58.088619 kernel: 32regs : 11909 MB/sec Aug 13 00:17:58.089863 kernel: arm64_neon : 9580 MB/sec Aug 13 00:17:58.089896 kernel: xor: using function: 32regs (11909 MB/sec) Aug 13 00:17:58.176408 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:17:58.196535 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:17:58.209643 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:17:58.244113 systemd-udevd[470]: Using default interface naming scheme 'v255'. Aug 13 00:17:58.252080 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:17:58.265716 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 00:17:58.302327 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Aug 13 00:17:58.365801 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:17:58.377809 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:17:58.500221 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:17:58.520896 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 00:17:58.563503 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:17:58.571709 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:17:58.577742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:17:58.584204 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:17:58.607620 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:17:58.635336 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:17:58.720647 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Aug 13 00:17:58.720710 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Aug 13 00:17:58.733573 kernel: ena 0000:00:05.0: ENA device version: 0.10 Aug 13 00:17:58.733933 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Aug 13 00:17:58.733147 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:17:58.733435 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:17:58.745430 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:17:58.749742 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:17:58.756260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:17:58.762778 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:17:58.774383 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:dd:ae:54:f5:5f Aug 13 00:17:58.775153 (udev-worker)[525]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:17:58.778625 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 13 00:17:58.799699 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Aug 13 00:17:58.805251 kernel: nvme nvme0: pci function 0000:00:04.0 Aug 13 00:17:58.813449 kernel: nvme nvme0: 2/0/0 default/read/poll queues Aug 13 00:17:58.820891 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:17:58.820999 kernel: GPT:9289727 != 16777215 Aug 13 00:17:58.823400 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:17:58.823439 kernel: GPT:9289727 != 16777215 Aug 13 00:17:58.823466 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:17:58.823492 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:17:58.829102 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:17:58.853663 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:17:58.891746 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:17:58.927861 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (515) Aug 13 00:17:58.964406 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (516) Aug 13 00:17:59.041879 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 13 00:17:59.083569 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Aug 13 00:17:59.110318 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Aug 13 00:17:59.128806 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Aug 13 00:17:59.133724 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Aug 13 00:17:59.164718 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:17:59.178464 disk-uuid[659]: Primary Header is updated. Aug 13 00:17:59.178464 disk-uuid[659]: Secondary Entries is updated. Aug 13 00:17:59.178464 disk-uuid[659]: Secondary Header is updated. Aug 13 00:17:59.188388 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:17:59.214379 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:17:59.224417 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:18:00.233785 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:18:00.234660 disk-uuid[660]: The operation has completed successfully. Aug 13 00:18:00.406074 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:18:00.408778 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 00:18:00.467673 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:18:00.491142 sh[1002]: Success Aug 13 00:18:00.517400 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 13 00:18:00.627825 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:18:00.642574 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:18:00.643229 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
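Note: verity-setup.service has just created /dev/mapper/usr from the USR-A partition, verified against the root hash passed as verity.usrhash= on the kernel command line. A quick way to inspect the resulting device-mapper target, assuming the veritysetup and dmsetup tools are available in the environment:

  veritysetup status usr            # data device, hash device and root hash of the mapping
  dmsetup table usr                 # raw device-mapper table; the target type should be "verity"
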
Aug 13 00:18:00.682803 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982 Aug 13 00:18:00.682877 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:18:00.682905 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 00:18:00.686202 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 00:18:00.686239 kernel: BTRFS info (device dm-0): using free space tree Aug 13 00:18:00.842403 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 13 00:18:00.877803 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:18:00.878281 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:18:00.890737 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:18:00.894119 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 00:18:00.940422 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:18:00.940506 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:18:00.941851 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 13 00:18:00.950926 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 13 00:18:00.966787 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:18:00.972616 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:18:00.982408 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:18:00.997882 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:18:01.086074 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:18:01.102677 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:18:01.163852 systemd-networkd[1194]: lo: Link UP Aug 13 00:18:01.163879 systemd-networkd[1194]: lo: Gained carrier Aug 13 00:18:01.166442 systemd-networkd[1194]: Enumeration completed Aug 13 00:18:01.167404 systemd-networkd[1194]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:18:01.167411 systemd-networkd[1194]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:18:01.168644 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:18:01.180662 systemd[1]: Reached target network.target - Network. Aug 13 00:18:01.186779 systemd-networkd[1194]: eth0: Link UP Aug 13 00:18:01.186786 systemd-networkd[1194]: eth0: Gained carrier Aug 13 00:18:01.186803 systemd-networkd[1194]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
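Note: systemd-networkd matched eth0 against /usr/lib/systemd/network/zz-default.network, the catch-all unit used when net.ifnames=0 leaves interfaces with kernel names. Such a unit is roughly of this shape (illustrative only, not the file's verbatim contents):

  [Match]
  Name=eth*

  [Network]
  DHCP=yes

On the booted system, networkctl status eth0 reports which .network file the link was matched against.
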
Aug 13 00:18:01.218447 systemd-networkd[1194]: eth0: DHCPv4 address 172.31.19.145/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 13 00:18:01.502800 ignition[1125]: Ignition 2.19.0 Aug 13 00:18:01.503326 ignition[1125]: Stage: fetch-offline Aug 13 00:18:01.505001 ignition[1125]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:18:01.505025 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:18:01.506659 ignition[1125]: Ignition finished successfully Aug 13 00:18:01.516236 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:18:01.533772 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 00:18:01.558888 ignition[1205]: Ignition 2.19.0 Aug 13 00:18:01.558918 ignition[1205]: Stage: fetch Aug 13 00:18:01.559586 ignition[1205]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:18:01.559612 ignition[1205]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:18:01.559765 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:18:01.571984 ignition[1205]: PUT result: OK Aug 13 00:18:01.579263 ignition[1205]: parsed url from cmdline: "" Aug 13 00:18:01.579282 ignition[1205]: no config URL provided Aug 13 00:18:01.579297 ignition[1205]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:18:01.579323 ignition[1205]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:18:01.579675 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:18:01.589767 ignition[1205]: PUT result: OK Aug 13 00:18:01.589845 ignition[1205]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Aug 13 00:18:01.596486 ignition[1205]: GET result: OK Aug 13 00:18:01.596989 ignition[1205]: parsing config with SHA512: a5c387b2d8061bc010c84486a011ce0bd6098cf90fae5e37c2d91eb62be4a216d23f1cf4a4c3e9ddc70fc2b8ea0ccb24443294d678dba9c3fb2e19f249f67b7d Aug 13 00:18:01.607118 unknown[1205]: fetched base config from "system" Aug 13 00:18:01.607148 unknown[1205]: fetched base config from "system" Aug 13 00:18:01.608034 ignition[1205]: fetch: fetch complete Aug 13 00:18:01.607162 unknown[1205]: fetched user config from "aws" Aug 13 00:18:01.608045 ignition[1205]: fetch: fetch passed Aug 13 00:18:01.608129 ignition[1205]: Ignition finished successfully Aug 13 00:18:01.616619 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:18:01.635220 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 00:18:01.658929 ignition[1211]: Ignition 2.19.0 Aug 13 00:18:01.659462 ignition[1211]: Stage: kargs Aug 13 00:18:01.660110 ignition[1211]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:18:01.660135 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:18:01.660309 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:18:01.670560 ignition[1211]: PUT result: OK Aug 13 00:18:01.675223 ignition[1211]: kargs: kargs passed Aug 13 00:18:01.675537 ignition[1211]: Ignition finished successfully Aug 13 00:18:01.684536 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:18:01.696830 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
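Note: the Ignition fetch stage above follows the IMDSv2 flow visible in its log lines: a PUT to mint a session token, then a GET for the instance user data with that token attached. The same exchange can be reproduced from the instance with curl, using the documented IMDSv2 headers:

  TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
      -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600')
  curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/2019-10-01/user-data
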
Aug 13 00:18:01.725032 ignition[1217]: Ignition 2.19.0 Aug 13 00:18:01.725064 ignition[1217]: Stage: disks Aug 13 00:18:01.725873 ignition[1217]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:18:01.725900 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:18:01.726074 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:18:01.735317 ignition[1217]: PUT result: OK Aug 13 00:18:01.744064 ignition[1217]: disks: disks passed Aug 13 00:18:01.744214 ignition[1217]: Ignition finished successfully Aug 13 00:18:01.747434 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:18:01.755220 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:18:01.758396 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:18:01.761772 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:18:01.764688 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:18:01.767590 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:18:01.791408 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:18:01.831662 systemd-fsck[1226]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 13 00:18:01.838368 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:18:01.854484 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:18:01.953397 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none. Aug 13 00:18:01.954082 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:18:01.957114 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:18:01.970679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:18:01.979011 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:18:01.988242 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 00:18:01.988528 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:18:01.988580 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:18:02.007385 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1245) Aug 13 00:18:02.017369 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:18:02.017438 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:18:02.019507 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 13 00:18:02.022830 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:18:02.037403 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 13 00:18:02.038629 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 00:18:02.045829 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
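Note: the disks stage, the fsck and the sysroot mounts above all address partitions by label or partlabel (ROOT, OEM, EFI-SYSTEM, USR-A). The mapping from those names to nvme0n1 partitions can be checked with, for example:

  lsblk -f /dev/nvme0n1             # partitions with filesystem type, label and UUID
  ls -l /dev/disk/by-label/ /dev/disk/by-partlabel/
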
Aug 13 00:18:02.268502 systemd-networkd[1194]: eth0: Gained IPv6LL Aug 13 00:18:02.465558 initrd-setup-root[1269]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:18:02.476276 initrd-setup-root[1276]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:18:02.486060 initrd-setup-root[1283]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:18:02.496702 initrd-setup-root[1290]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:18:02.849052 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:18:02.861794 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:18:02.873710 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:18:02.890245 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 00:18:02.893062 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:18:02.928530 ignition[1357]: INFO : Ignition 2.19.0 Aug 13 00:18:02.928530 ignition[1357]: INFO : Stage: mount Aug 13 00:18:02.933425 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:18:02.933425 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:18:02.933425 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:18:02.942853 ignition[1357]: INFO : PUT result: OK Aug 13 00:18:02.949083 ignition[1357]: INFO : mount: mount passed Aug 13 00:18:02.951561 ignition[1357]: INFO : Ignition finished successfully Aug 13 00:18:02.956057 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:18:02.965322 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:18:02.975820 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:18:02.995034 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:18:03.038584 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1370) Aug 13 00:18:03.042676 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:18:03.042720 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:18:03.044004 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 13 00:18:03.049689 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 13 00:18:03.052976 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 00:18:03.098915 ignition[1387]: INFO : Ignition 2.19.0 Aug 13 00:18:03.101712 ignition[1387]: INFO : Stage: files Aug 13 00:18:03.104169 ignition[1387]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:18:03.104169 ignition[1387]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:18:03.109993 ignition[1387]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:18:03.114489 ignition[1387]: INFO : PUT result: OK Aug 13 00:18:03.120015 ignition[1387]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:18:03.124655 ignition[1387]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:18:03.124655 ignition[1387]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:18:03.166340 ignition[1387]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:18:03.170159 ignition[1387]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:18:03.174460 unknown[1387]: wrote ssh authorized keys file for user: core Aug 13 00:18:03.177138 ignition[1387]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:18:03.180249 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Aug 13 00:18:03.180249 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Aug 13 00:18:03.267837 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:18:03.437645 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Aug 13 00:18:03.437645 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:18:03.447923 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Aug 13 00:18:03.662195 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:18:03.798428 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:18:03.798428 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:18:03.798428 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:18:03.798428 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:18:03.798428 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:18:03.798428 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:18:03.826217 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:18:03.826217 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 
13 00:18:03.826217 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:18:03.826217 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:18:03.826217 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:18:03.826217 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 13 00:18:03.826217 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 13 00:18:03.826217 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 13 00:18:03.826217 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Aug 13 00:18:04.248596 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:18:04.601144 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 13 00:18:04.606189 ignition[1387]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 00:18:04.606189 ignition[1387]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:18:04.606189 ignition[1387]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:18:04.606189 ignition[1387]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 00:18:04.606189 ignition[1387]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:18:04.606189 ignition[1387]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:18:04.606189 ignition[1387]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:18:04.606189 ignition[1387]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:18:04.606189 ignition[1387]: INFO : files: files passed Aug 13 00:18:04.606189 ignition[1387]: INFO : Ignition finished successfully Aug 13 00:18:04.645795 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:18:04.656617 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:18:04.668273 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:18:04.682075 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:18:04.682285 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
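Note: the files stage above was driven by the Ignition config fetched earlier as instance user data, which lists files to write and systemd units to enable. As a rough illustration of the general shape of such a config (spec 3.x JSON, the format consumed by the Ignition 2.19.0 binary logged here; the path, URL and mode below are hypothetical and not taken from this boot; mode 493 is octal 0755 in decimal):

  {
    "ignition": { "version": "3.3.0" },
    "storage": {
      "files": [
        { "path": "/home/core/install.sh",
          "mode": 493,
          "contents": { "source": "https://example.com/install.sh" } }
      ]
    },
    "systemd": {
      "units": [
        { "name": "prepare-helm.service", "enabled": true }
      ]
    }
  }

An "enabled": true entry is what produces the "setting preset to enabled" lines seen above.
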
Aug 13 00:18:04.701276 initrd-setup-root-after-ignition[1416]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:18:04.701276 initrd-setup-root-after-ignition[1416]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:18:04.710440 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:18:04.714873 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:18:04.721468 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:18:04.736754 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:18:04.782707 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:18:04.783111 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:18:04.791677 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:18:04.794423 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:18:04.797024 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:18:04.799312 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:18:04.841501 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:18:04.864403 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:18:04.891457 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:18:04.894994 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:18:04.898305 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:18:04.908161 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:18:04.908425 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:18:04.912503 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:18:04.915680 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:18:04.918631 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:18:04.936369 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:18:04.939615 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:18:04.942955 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:18:04.945771 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:18:04.959582 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:18:04.962581 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:18:04.965717 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:18:04.975178 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:18:04.975461 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:18:04.981692 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:18:04.984746 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:18:04.987972 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Aug 13 00:18:04.988261 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:18:04.994653 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:18:04.994972 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:18:05.014637 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:18:05.014902 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:18:05.018464 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:18:05.018674 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:18:05.040845 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:18:05.054407 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:18:05.060771 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:18:05.061066 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:18:05.067296 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:18:05.067563 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:18:05.093604 ignition[1440]: INFO : Ignition 2.19.0 Aug 13 00:18:05.093604 ignition[1440]: INFO : Stage: umount Aug 13 00:18:05.102136 ignition[1440]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:18:05.102136 ignition[1440]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:18:05.102136 ignition[1440]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:18:05.102136 ignition[1440]: INFO : PUT result: OK Aug 13 00:18:05.100518 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:18:05.101050 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:18:05.126270 ignition[1440]: INFO : umount: umount passed Aug 13 00:18:05.128896 ignition[1440]: INFO : Ignition finished successfully Aug 13 00:18:05.134776 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:18:05.136094 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:18:05.140424 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:18:05.140560 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:18:05.141146 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:18:05.141224 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:18:05.144701 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:18:05.144791 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:18:05.145534 systemd[1]: Stopped target network.target - Network. Aug 13 00:18:05.151981 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:18:05.152137 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:18:05.156627 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:18:05.157420 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:18:05.185143 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:18:05.189762 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:18:05.193035 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:18:05.213156 systemd[1]: iscsid.socket: Deactivated successfully. 
Aug 13 00:18:05.213246 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:18:05.216667 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:18:05.216743 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:18:05.219469 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:18:05.219562 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:18:05.222286 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:18:05.222392 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:18:05.225416 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:18:05.228129 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:18:05.242129 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:18:05.245885 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:18:05.246097 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:18:05.266494 systemd-networkd[1194]: eth0: DHCPv6 lease lost Aug 13 00:18:05.267338 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:18:05.267535 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:18:05.280739 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:18:05.286312 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:18:05.291929 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:18:05.292276 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:18:05.299436 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:18:05.299555 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:18:05.320752 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:18:05.328242 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:18:05.328372 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:18:05.331923 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:18:05.332010 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:18:05.334838 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:18:05.334930 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:18:05.337817 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:18:05.337918 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:18:05.341715 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:18:05.386106 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:18:05.386579 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:18:05.395339 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:18:05.397410 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:18:05.402017 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:18:05.402106 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:18:05.413511 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Aug 13 00:18:05.413605 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:18:05.416385 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:18:05.416478 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:18:05.419417 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:18:05.419505 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:18:05.422442 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:18:05.422528 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:18:05.455764 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:18:05.458424 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:18:05.458535 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:18:05.461524 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:18:05.461623 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:18:05.465187 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:18:05.465269 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:18:05.469037 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:18:05.469124 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:18:05.504730 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:18:05.507711 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:18:05.512148 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:18:05.528771 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:18:05.544613 systemd[1]: Switching root. Aug 13 00:18:05.589460 systemd-journald[250]: Journal stopped Aug 13 00:18:07.903098 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Aug 13 00:18:07.903233 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:18:07.903278 kernel: SELinux: policy capability open_perms=1 Aug 13 00:18:07.903309 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:18:07.903339 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:18:07.903396 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:18:07.903429 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:18:07.903460 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:18:07.903491 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:18:07.903522 kernel: audit: type=1403 audit(1755044286.042:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:18:07.903563 systemd[1]: Successfully loaded SELinux policy in 76.016ms. Aug 13 00:18:07.903615 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.597ms. Aug 13 00:18:07.903649 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:18:07.903680 systemd[1]: Detected virtualization amazon. 
Aug 13 00:18:07.903714 systemd[1]: Detected architecture arm64. Aug 13 00:18:07.903743 systemd[1]: Detected first boot. Aug 13 00:18:07.903775 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:18:07.903808 zram_generator::config[1483]: No configuration found. Aug 13 00:18:07.903841 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:18:07.903872 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:18:07.903902 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:18:07.903937 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:18:07.903973 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:18:07.904006 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:18:07.904038 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:18:07.904070 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:18:07.904112 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:18:07.904145 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:18:07.904175 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:18:07.904204 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:18:07.904239 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:18:07.904269 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:18:07.904302 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:18:07.904335 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:18:07.906453 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:18:07.906507 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:18:07.906537 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 00:18:07.906569 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:18:07.906602 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:18:07.906640 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:18:07.906682 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:18:07.906715 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:18:07.906747 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:18:07.906778 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:18:07.906809 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:18:07.906841 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:18:07.906873 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:18:07.906907 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:18:07.906940 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:18:07.906981 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Aug 13 00:18:07.907013 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:18:07.907045 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:18:07.907077 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:18:07.907107 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:18:07.907138 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:18:07.907168 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:18:07.907203 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:18:07.907235 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:18:07.907265 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:18:07.907295 systemd[1]: Reached target machines.target - Containers. Aug 13 00:18:07.907326 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:18:07.907379 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:18:07.907412 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:18:07.907444 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:18:07.907474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:18:07.907509 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:18:07.907538 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:18:07.907567 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:18:07.907598 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:18:07.907636 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:18:07.907668 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:18:07.908502 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:18:07.908552 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:18:07.908592 kernel: ACPI: bus type drm_connector registered Aug 13 00:18:07.908621 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:18:07.908651 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:18:07.908679 kernel: fuse: init (API version 7.39) Aug 13 00:18:07.908707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:18:07.908738 kernel: loop: module loaded Aug 13 00:18:07.908767 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:18:07.908796 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:18:07.908829 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:18:07.908864 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:18:07.908893 systemd[1]: Stopped verity-setup.service. Aug 13 00:18:07.908922 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Aug 13 00:18:07.908951 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:18:07.908980 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:18:07.909009 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:18:07.909038 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:18:07.909067 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:18:07.909101 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:18:07.909130 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:18:07.909159 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:18:07.909230 systemd-journald[1561]: Collecting audit messages is disabled. Aug 13 00:18:07.909282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:18:07.909317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:18:07.910410 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:18:07.910484 systemd-journald[1561]: Journal started Aug 13 00:18:07.910537 systemd-journald[1561]: Runtime Journal (/run/log/journal/ec2f27eab272b09f8884ae8738c1933c) is 8.0M, max 75.3M, 67.3M free. Aug 13 00:18:07.254643 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:18:07.281152 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Aug 13 00:18:07.282004 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:18:07.914508 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:18:07.919370 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:18:07.928035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:18:07.928692 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:18:07.934560 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:18:07.941034 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:18:07.941637 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:18:07.950785 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:18:07.951089 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:18:07.957113 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:18:07.963620 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:18:07.970574 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:18:08.001072 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:18:08.015712 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:18:08.031038 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:18:08.037613 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:18:08.037689 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:18:08.047938 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 00:18:08.063294 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Aug 13 00:18:08.071703 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:18:08.074997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:18:08.081753 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:18:08.090749 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:18:08.097100 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:18:08.102104 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:18:08.108734 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:18:08.117838 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:18:08.125691 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:18:08.134923 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:18:08.146443 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:18:08.150265 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:18:08.155079 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:18:08.159754 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:18:08.187762 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:18:08.197555 systemd-journald[1561]: Time spent on flushing to /var/log/journal/ec2f27eab272b09f8884ae8738c1933c is 117.610ms for 914 entries. Aug 13 00:18:08.197555 systemd-journald[1561]: System Journal (/var/log/journal/ec2f27eab272b09f8884ae8738c1933c) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:18:08.342298 systemd-journald[1561]: Received client request to flush runtime journal. Aug 13 00:18:08.344498 kernel: loop0: detected capacity change from 0 to 52536 Aug 13 00:18:08.344581 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:18:08.240469 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:18:08.244160 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:18:08.260131 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 00:18:08.300707 udevadm[1618]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 00:18:08.338056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:18:08.342991 systemd-tmpfiles[1613]: ACLs are not supported, ignoring. Aug 13 00:18:08.343017 systemd-tmpfiles[1613]: ACLs are not supported, ignoring. Aug 13 00:18:08.359885 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:18:08.368622 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:18:08.381515 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:18:08.387059 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Aug 13 00:18:08.390485 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 00:18:08.422503 kernel: loop1: detected capacity change from 0 to 114328 Aug 13 00:18:08.487949 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:18:08.505992 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:18:08.557184 kernel: loop2: detected capacity change from 0 to 114432 Aug 13 00:18:08.580072 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Aug 13 00:18:08.580117 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Aug 13 00:18:08.595847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:18:08.689399 kernel: loop3: detected capacity change from 0 to 211168 Aug 13 00:18:08.836438 kernel: loop4: detected capacity change from 0 to 52536 Aug 13 00:18:08.869421 kernel: loop5: detected capacity change from 0 to 114328 Aug 13 00:18:08.889393 kernel: loop6: detected capacity change from 0 to 114432 Aug 13 00:18:08.907496 kernel: loop7: detected capacity change from 0 to 211168 Aug 13 00:18:08.941964 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Aug 13 00:18:08.942988 (sd-merge)[1641]: Merged extensions into '/usr'. Aug 13 00:18:08.952445 systemd[1]: Reloading requested from client PID 1612 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:18:08.952480 systemd[1]: Reloading... Aug 13 00:18:09.118418 zram_generator::config[1663]: No configuration found. Aug 13 00:18:09.232731 ldconfig[1607]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:18:09.447732 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:18:09.559201 systemd[1]: Reloading finished in 605 ms. Aug 13 00:18:09.603585 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:18:09.607145 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:18:09.611922 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:18:09.632650 systemd[1]: Starting ensure-sysext.service... Aug 13 00:18:09.637989 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:18:09.653961 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:18:09.673244 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:18:09.673470 systemd[1]: Reloading... Aug 13 00:18:09.693315 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:18:09.694095 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:18:09.695840 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:18:09.696343 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Aug 13 00:18:09.696498 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Aug 13 00:18:09.703671 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. 
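The sd-merge step above overlays the listed extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) onto /usr. As a rough sketch, the snippet below enumerates the *.raw images in systemd-sysext's documented search directories; the kubernetes image was symlinked into /etc/extensions by Ignition earlier in the boot.

```python
# Sketch: list the *.raw system extension images that systemd-sysext would
# merge into /usr, using its standard search directories.
from pathlib import Path

SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

for d in SEARCH_DIRS:
    path = Path(d)
    if not path.is_dir():
        continue
    for image in sorted(path.glob("*.raw")):
        # Symlinks (e.g. kubernetes.raw -> /opt/extensions/...) resolve to
        # the backing image file.
        print(f"{image} -> {image.resolve()}")
```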
Aug 13 00:18:09.703700 systemd-tmpfiles[1721]: Skipping /boot Aug 13 00:18:09.724038 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:18:09.724066 systemd-tmpfiles[1721]: Skipping /boot Aug 13 00:18:09.782494 systemd-udevd[1722]: Using default interface naming scheme 'v255'. Aug 13 00:18:09.863408 zram_generator::config[1755]: No configuration found. Aug 13 00:18:10.022985 (udev-worker)[1787]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:18:10.216028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:18:10.291516 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1774) Aug 13 00:18:10.390790 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:18:10.392772 systemd[1]: Reloading finished in 718 ms. Aug 13 00:18:10.426538 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:18:10.435408 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:18:10.522529 systemd[1]: Finished ensure-sysext.service. Aug 13 00:18:10.586902 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:18:10.604629 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:18:10.613029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:18:10.617244 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:18:10.624708 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:18:10.639567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:18:10.647700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:18:10.654030 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:18:10.659658 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:18:10.667866 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:18:10.676265 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:18:10.681781 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:18:10.688682 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:18:10.696678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:18:10.703076 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:18:10.707513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:18:10.707841 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:18:10.714481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:18:10.714809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:18:10.727397 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
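Unit names like dev-disk-by\x2dlabel-OEM.device above come from systemd's path escaping: "/" becomes "-" and other special characters, including literal "-", become \xXX sequences. The sketch below is a simplified approximation of `systemd-escape --path` for illustration; the real tool handles more edge cases.

```python
# Simplified approximation of systemd's path-to-unit-name escaping,
# reproducing the dev-disk-by\x2dlabel-OEM.device name seen in the log.
def systemd_escape_path(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/OEM") + ".device")
# -> dev-disk-by\x2dlabel-OEM.device
```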
Aug 13 00:18:10.752895 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:18:10.756497 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:18:10.765913 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:18:10.766696 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:18:10.767682 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:18:10.798665 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:18:10.802418 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:18:10.804483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:18:10.809209 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:18:10.841744 lvm[1936]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:18:10.857781 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:18:10.895008 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:18:10.900518 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:18:10.903339 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:18:10.912868 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:18:10.929794 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:18:10.936477 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:18:10.938094 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:18:10.956917 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:18:10.988875 augenrules[1960]: No rules Aug 13 00:18:11.003014 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:18:11.011780 lvm[1957]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:18:11.027190 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:18:11.042679 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:18:11.071595 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:18:11.103834 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:18:11.189952 systemd-networkd[1929]: lo: Link UP Aug 13 00:18:11.189976 systemd-networkd[1929]: lo: Gained carrier Aug 13 00:18:11.193058 systemd-networkd[1929]: Enumeration completed Aug 13 00:18:11.193294 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:18:11.194343 systemd-networkd[1929]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:18:11.194383 systemd-networkd[1929]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 00:18:11.196813 systemd-networkd[1929]: eth0: Link UP Aug 13 00:18:11.197220 systemd-networkd[1929]: eth0: Gained carrier Aug 13 00:18:11.197277 systemd-networkd[1929]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:18:11.207218 systemd-resolved[1930]: Positive Trust Anchors: Aug 13 00:18:11.207664 systemd-resolved[1930]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:18:11.207731 systemd-resolved[1930]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:18:11.211800 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:18:11.223504 systemd-networkd[1929]: eth0: DHCPv4 address 172.31.19.145/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 13 00:18:11.227440 systemd-resolved[1930]: Defaulting to hostname 'linux'. Aug 13 00:18:11.231968 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:18:11.237649 systemd[1]: Reached target network.target - Network. Aug 13 00:18:11.241874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:18:11.244806 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:18:11.249534 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:18:11.255936 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:18:11.259619 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:18:11.263530 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:18:11.266947 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:18:11.270647 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:18:11.270877 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:18:11.273385 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:18:11.277521 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:18:11.283222 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:18:11.298268 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:18:11.304942 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:18:11.308142 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:18:11.310795 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:18:11.313309 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:18:11.313413 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
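eth0 above was configured from the stock zz-default.network and acquired 172.31.19.145/20 over DHCP. For illustration only, a minimal DHCP .network unit with a similar effect is sketched below; the actual Flatcar-shipped zz-default.network matches interfaces more broadly and carries additional settings, and the file path here is an assumption.

```python
# Hypothetical sketch: write a minimal systemd-networkd unit that DHCPs eth0,
# roughly what the zz-default.network fallback did for this instance.
from pathlib import Path

NETWORK_UNIT = """\
[Match]
Name=eth0

[Network]
DHCP=yes
"""

target = Path("/run/systemd/network/10-dhcp-eth0.network")
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(NETWORK_UNIT)
print(f"wrote {target}")
```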
Aug 13 00:18:11.322797 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:18:11.336242 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:18:11.355634 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:18:11.365742 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:18:11.398401 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:18:11.403385 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:18:11.407536 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:18:11.413911 systemd[1]: Started ntpd.service - Network Time Service. Aug 13 00:18:11.425671 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:18:11.434247 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 13 00:18:11.440723 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:18:11.445905 jq[1983]: false Aug 13 00:18:11.451801 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:18:11.464454 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:18:11.471617 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:18:11.472633 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:18:11.477773 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:18:11.486822 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:18:11.498128 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:18:11.499035 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:18:11.545267 dbus-daemon[1982]: [system] SELinux support is enabled Aug 13 00:18:11.545755 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:18:11.558381 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:18:11.558449 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:18:11.564680 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:18:11.564727 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Aug 13 00:18:11.580500 dbus-daemon[1982]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1929 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 00:18:11.607703 extend-filesystems[1984]: Found loop4 Aug 13 00:18:11.607703 extend-filesystems[1984]: Found loop5 Aug 13 00:18:11.607703 extend-filesystems[1984]: Found loop6 Aug 13 00:18:11.607703 extend-filesystems[1984]: Found loop7 Aug 13 00:18:11.607703 extend-filesystems[1984]: Found nvme0n1 Aug 13 00:18:11.607703 extend-filesystems[1984]: Found nvme0n1p1 Aug 13 00:18:11.607703 extend-filesystems[1984]: Found nvme0n1p2 Aug 13 00:18:11.607703 extend-filesystems[1984]: Found nvme0n1p3 Aug 13 00:18:11.607703 extend-filesystems[1984]: Found usr Aug 13 00:18:11.607703 extend-filesystems[1984]: Found nvme0n1p4 Aug 13 00:18:11.607703 extend-filesystems[1984]: Found nvme0n1p6 Aug 13 00:18:11.607703 extend-filesystems[1984]: Found nvme0n1p7 Aug 13 00:18:11.607703 extend-filesystems[1984]: Found nvme0n1p9 Aug 13 00:18:11.607703 extend-filesystems[1984]: Checking size of /dev/nvme0n1p9 Aug 13 00:18:11.754729 tar[1998]: linux-arm64/LICENSE Aug 13 00:18:11.754729 tar[1998]: linux-arm64/helm Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:33 UTC 2025 (1): Starting Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: ---------------------------------------------------- Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: corporation. 
Support and training for ntp-4 are Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: available at https://www.nwtime.org/support Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: ---------------------------------------------------- Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: proto: precision = 0.108 usec (-23) Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: basedate set to 2025-07-31 Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: gps base set to 2025-08-03 (week 2378) Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: Listen normally on 3 eth0 172.31.19.145:123 Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: Listen normally on 4 lo [::1]:123 Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: bind(21) AF_INET6 fe80::4dd:aeff:fe54:f55f%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: unable to create socket on eth0 (5) for fe80::4dd:aeff:fe54:f55f%2#123 Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: failed to init interface for address fe80::4dd:aeff:fe54:f55f%2 Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:18:11.755798 ntpd[1987]: 13 Aug 00:18:11 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:18:11.609955 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 00:18:11.779021 jq[1996]: true Aug 13 00:18:11.650043 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:33 UTC 2025 (1): Starting Aug 13 00:18:11.794057 update_engine[1994]: I20250813 00:18:11.688566 1994 main.cc:92] Flatcar Update Engine starting Aug 13 00:18:11.794057 update_engine[1994]: I20250813 00:18:11.696467 1994 update_check_scheduler.cc:74] Next update check in 7m16s Aug 13 00:18:11.689808 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:18:11.650141 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 00:18:11.805879 extend-filesystems[1984]: Resized partition /dev/nvme0n1p9 Aug 13 00:18:11.690340 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:18:11.650164 ntpd[1987]: ---------------------------------------------------- Aug 13 00:18:11.816186 extend-filesystems[2027]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:18:11.704180 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:18:11.650184 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Aug 13 00:18:11.712732 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:18:11.650204 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 00:18:11.760281 (ntainerd)[2021]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:18:11.650224 ntpd[1987]: corporation. Support and training for ntp-4 are Aug 13 00:18:11.775130 systemd[1]: motdgen.service: Deactivated successfully. 
Aug 13 00:18:11.650242 ntpd[1987]: available at https://www.nwtime.org/support Aug 13 00:18:11.838391 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Aug 13 00:18:11.775587 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:18:11.650262 ntpd[1987]: ---------------------------------------------------- Aug 13 00:18:11.660955 ntpd[1987]: proto: precision = 0.108 usec (-23) Aug 13 00:18:11.664771 ntpd[1987]: basedate set to 2025-07-31 Aug 13 00:18:11.664812 ntpd[1987]: gps base set to 2025-08-03 (week 2378) Aug 13 00:18:11.673529 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 00:18:11.673664 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 00:18:11.675674 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 00:18:11.675776 ntpd[1987]: Listen normally on 3 eth0 172.31.19.145:123 Aug 13 00:18:11.675861 ntpd[1987]: Listen normally on 4 lo [::1]:123 Aug 13 00:18:11.675951 ntpd[1987]: bind(21) AF_INET6 fe80::4dd:aeff:fe54:f55f%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:18:11.676002 ntpd[1987]: unable to create socket on eth0 (5) for fe80::4dd:aeff:fe54:f55f%2#123 Aug 13 00:18:11.676033 ntpd[1987]: failed to init interface for address fe80::4dd:aeff:fe54:f55f%2 Aug 13 00:18:11.676111 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Aug 13 00:18:11.714235 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:18:11.714295 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:18:11.858937 jq[2020]: true Aug 13 00:18:11.954012 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Aug 13 00:18:11.979808 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:18:11.980373 extend-filesystems[2027]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Aug 13 00:18:11.980373 extend-filesystems[2027]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:18:11.980373 extend-filesystems[2027]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Aug 13 00:18:12.012774 extend-filesystems[1984]: Resized filesystem in /dev/nvme0n1p9 Aug 13 00:18:11.983090 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:18:11.996623 systemd[1]: Finished setup-oem.service - Setup OEM. 
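The on-line resize above grew the root filesystem on /dev/nvme0n1p9 from 553472 to 1489915 4 KiB blocks; the quick check below converts those block counts to sizes.

```python
# Convert the ext4 block counts from the resize log into GiB (4 KiB blocks).
old_blocks, new_blocks, block_size = 553_472, 1_489_915, 4096

to_gib = lambda blocks: blocks * block_size / 2**30
print(f"before: {to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after:  {to_gib(new_blocks):.2f} GiB")  # ~5.68 GiB
```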
Aug 13 00:18:12.090283 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1760) Aug 13 00:18:12.133285 coreos-metadata[1981]: Aug 13 00:18:12.132 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 00:18:12.148314 coreos-metadata[1981]: Aug 13 00:18:12.141 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Aug 13 00:18:12.148314 coreos-metadata[1981]: Aug 13 00:18:12.147 INFO Fetch successful Aug 13 00:18:12.148314 coreos-metadata[1981]: Aug 13 00:18:12.147 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Aug 13 00:18:12.149451 coreos-metadata[1981]: Aug 13 00:18:12.148 INFO Fetch successful Aug 13 00:18:12.149451 coreos-metadata[1981]: Aug 13 00:18:12.148 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Aug 13 00:18:12.155751 coreos-metadata[1981]: Aug 13 00:18:12.153 INFO Fetch successful Aug 13 00:18:12.155751 coreos-metadata[1981]: Aug 13 00:18:12.153 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Aug 13 00:18:12.170089 coreos-metadata[1981]: Aug 13 00:18:12.169 INFO Fetch successful Aug 13 00:18:12.170089 coreos-metadata[1981]: Aug 13 00:18:12.169 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Aug 13 00:18:12.173483 bash[2065]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:18:12.172789 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:18:12.184105 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:18:12.186591 coreos-metadata[1981]: Aug 13 00:18:12.186 INFO Fetch failed with 404: resource not found Aug 13 00:18:12.186591 coreos-metadata[1981]: Aug 13 00:18:12.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Aug 13 00:18:12.196552 coreos-metadata[1981]: Aug 13 00:18:12.196 INFO Fetch successful Aug 13 00:18:12.196552 coreos-metadata[1981]: Aug 13 00:18:12.196 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Aug 13 00:18:12.206957 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) Aug 13 00:18:12.207013 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) Aug 13 00:18:12.215811 coreos-metadata[1981]: Aug 13 00:18:12.210 INFO Fetch successful Aug 13 00:18:12.215811 coreos-metadata[1981]: Aug 13 00:18:12.210 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Aug 13 00:18:12.214837 systemd[1]: Starting sshkeys.service... Aug 13 00:18:12.216671 coreos-metadata[1981]: Aug 13 00:18:12.216 INFO Fetch successful Aug 13 00:18:12.216671 coreos-metadata[1981]: Aug 13 00:18:12.216 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Aug 13 00:18:12.216865 systemd-logind[1993]: New seat seat0. Aug 13 00:18:12.223013 coreos-metadata[1981]: Aug 13 00:18:12.222 INFO Fetch successful Aug 13 00:18:12.223013 coreos-metadata[1981]: Aug 13 00:18:12.222 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Aug 13 00:18:12.224586 coreos-metadata[1981]: Aug 13 00:18:12.224 INFO Fetch successful Aug 13 00:18:12.234647 systemd[1]: Started systemd-logind.service - User Login Management. 
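coreos-metadata above follows the IMDSv2 flow: a PUT to /latest/api/token for a session token, then GETs against the 2021-01-03 metadata tree with that token. A minimal sketch of the same flow is shown below; only the instance-id path is queried and error handling is omitted for brevity.

```python
# Sketch of the IMDSv2 token + metadata fetch seen in the coreos-metadata log.
import urllib.request

IMDS = "http://169.254.169.254"

# PUT /latest/api/token to obtain a session token.
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

# GET a metadata path with the token attached.
meta_req = urllib.request.Request(
    f"{IMDS}/2021-01-03/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(meta_req).read().decode())
```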
Aug 13 00:18:12.336174 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 00:18:12.337632 dbus-daemon[1982]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2007 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 00:18:12.369184 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 00:18:12.389947 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:18:12.398631 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:18:12.409229 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 00:18:12.433768 containerd[2021]: time="2025-08-13T00:18:12.431873255Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 00:18:12.442974 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:18:12.457152 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:18:12.511963 polkitd[2103]: Started polkitd version 121 Aug 13 00:18:12.589825 polkitd[2103]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 00:18:12.589947 polkitd[2103]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 00:18:12.612279 polkitd[2103]: Finished loading, compiling and executing 2 rules Aug 13 00:18:12.620048 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 00:18:12.629902 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 00:18:12.641033 polkitd[2103]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 00:18:12.652507 ntpd[1987]: bind(24) AF_INET6 fe80::4dd:aeff:fe54:f55f%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:18:12.653148 ntpd[1987]: 13 Aug 00:18:12 ntpd[1987]: bind(24) AF_INET6 fe80::4dd:aeff:fe54:f55f%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:18:12.653148 ntpd[1987]: 13 Aug 00:18:12 ntpd[1987]: unable to create socket on eth0 (6) for fe80::4dd:aeff:fe54:f55f%2#123 Aug 13 00:18:12.653148 ntpd[1987]: 13 Aug 00:18:12 ntpd[1987]: failed to init interface for address fe80::4dd:aeff:fe54:f55f%2 Aug 13 00:18:12.652565 ntpd[1987]: unable to create socket on eth0 (6) for fe80::4dd:aeff:fe54:f55f%2#123 Aug 13 00:18:12.652595 ntpd[1987]: failed to init interface for address fe80::4dd:aeff:fe54:f55f%2 Aug 13 00:18:12.673307 containerd[2021]: time="2025-08-13T00:18:12.668727492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:12.692375 containerd[2021]: time="2025-08-13T00:18:12.688576488Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:12.692375 containerd[2021]: time="2025-08-13T00:18:12.688650060Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:18:12.692375 containerd[2021]: time="2025-08-13T00:18:12.688690452Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Aug 13 00:18:12.692375 containerd[2021]: time="2025-08-13T00:18:12.688996140Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 00:18:12.692375 containerd[2021]: time="2025-08-13T00:18:12.689031084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:12.692375 containerd[2021]: time="2025-08-13T00:18:12.689145024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:12.692375 containerd[2021]: time="2025-08-13T00:18:12.689173332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:12.692375 containerd[2021]: time="2025-08-13T00:18:12.689487564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:12.692375 containerd[2021]: time="2025-08-13T00:18:12.689523504Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:12.692375 containerd[2021]: time="2025-08-13T00:18:12.689555700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:12.692375 containerd[2021]: time="2025-08-13T00:18:12.689600868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:12.692910 containerd[2021]: time="2025-08-13T00:18:12.689791404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:12.692910 containerd[2021]: time="2025-08-13T00:18:12.690186096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:12.696778 containerd[2021]: time="2025-08-13T00:18:12.696719784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:12.698001 containerd[2021]: time="2025-08-13T00:18:12.697394232Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:18:12.698001 containerd[2021]: time="2025-08-13T00:18:12.697665396Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:18:12.698001 containerd[2021]: time="2025-08-13T00:18:12.697769340Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:18:12.700775 systemd-networkd[1929]: eth0: Gained IPv6LL Aug 13 00:18:12.718724 containerd[2021]: time="2025-08-13T00:18:12.717891648Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:18:12.718724 containerd[2021]: time="2025-08-13T00:18:12.717983784Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Aug 13 00:18:12.718724 containerd[2021]: time="2025-08-13T00:18:12.718019400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:18:12.718724 containerd[2021]: time="2025-08-13T00:18:12.718059096Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 00:18:12.718724 containerd[2021]: time="2025-08-13T00:18:12.718091124Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:18:12.718724 containerd[2021]: time="2025-08-13T00:18:12.718389672Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:18:12.715155 systemd-hostnamed[2007]: Hostname set to (transient) Aug 13 00:18:12.716736 systemd-resolved[1930]: System hostname changed to 'ip-172-31-19-145'. Aug 13 00:18:12.724047 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:18:12.731245 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:18:12.743827 containerd[2021]: time="2025-08-13T00:18:12.739809144Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:18:12.743972 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Aug 13 00:18:12.755670 containerd[2021]: time="2025-08-13T00:18:12.755584296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.755833440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.755903616Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.755943480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.756002088Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.756040644Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.756104196Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.756167172Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.756268236Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.756312672Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.756379320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.756424788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.756458364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.756488784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.757339 containerd[2021]: time="2025-08-13T00:18:12.756522732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756553200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756583812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756613884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756645588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756676104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756710892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756741456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756777876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756809712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756851472Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756900060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756930324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.758050 containerd[2021]: time="2025-08-13T00:18:12.756959136Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:18:12.757885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:12.778863 containerd[2021]: time="2025-08-13T00:18:12.767150736Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:18:12.778863 containerd[2021]: time="2025-08-13T00:18:12.767249892Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:18:12.778863 containerd[2021]: time="2025-08-13T00:18:12.767307408Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:18:12.778863 containerd[2021]: time="2025-08-13T00:18:12.767339172Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:18:12.778863 containerd[2021]: time="2025-08-13T00:18:12.767401536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.778863 containerd[2021]: time="2025-08-13T00:18:12.770979576Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:18:12.778863 containerd[2021]: time="2025-08-13T00:18:12.771319284Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:18:12.778863 containerd[2021]: time="2025-08-13T00:18:12.773404320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 00:18:12.771009 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:18:12.783745 containerd[2021]: time="2025-08-13T00:18:12.783484644Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] 
ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:18:12.792380 containerd[2021]: time="2025-08-13T00:18:12.783688632Z" level=info msg="Connect containerd service" Aug 13 00:18:12.792380 containerd[2021]: time="2025-08-13T00:18:12.786500424Z" level=info msg="using legacy CRI server" Aug 13 00:18:12.792380 containerd[2021]: time="2025-08-13T00:18:12.786547224Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:18:12.792380 containerd[2021]: time="2025-08-13T00:18:12.786764460Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:18:12.808109 containerd[2021]: time="2025-08-13T00:18:12.806800512Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:18:12.808109 containerd[2021]: time="2025-08-13T00:18:12.807205104Z" level=info msg="Start subscribing containerd event" Aug 13 00:18:12.808109 containerd[2021]: time="2025-08-13T00:18:12.807284940Z" level=info msg="Start recovering state" Aug 13 00:18:12.808109 containerd[2021]: time="2025-08-13T00:18:12.807443340Z" level=info msg="Start event monitor" Aug 13 00:18:12.808109 containerd[2021]: time="2025-08-13T00:18:12.807469465Z" level=info msg="Start snapshots syncer" Aug 13 00:18:12.808109 containerd[2021]: time="2025-08-13T00:18:12.807491485Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:18:12.808109 containerd[2021]: time="2025-08-13T00:18:12.807509569Z" level=info msg="Start streaming server" Aug 13 00:18:12.814175 containerd[2021]: time="2025-08-13T00:18:12.814122577Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:18:12.829631 containerd[2021]: time="2025-08-13T00:18:12.826092529Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:18:12.829631 containerd[2021]: time="2025-08-13T00:18:12.829121437Z" level=info msg="containerd successfully booted in 0.410624s" Aug 13 00:18:12.832953 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:18:12.875260 locksmithd[2022]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:18:12.915894 coreos-metadata[2102]: Aug 13 00:18:12.913 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 00:18:12.918196 coreos-metadata[2102]: Aug 13 00:18:12.916 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 13 00:18:12.920049 coreos-metadata[2102]: Aug 13 00:18:12.919 INFO Fetch successful Aug 13 00:18:12.921376 coreos-metadata[2102]: Aug 13 00:18:12.921 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 13 00:18:12.925378 coreos-metadata[2102]: Aug 13 00:18:12.922 INFO Fetch successful Aug 13 00:18:12.932228 unknown[2102]: wrote ssh authorized keys file for user: core Aug 13 00:18:12.989889 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Aug 13 00:18:13.026460 update-ssh-keys[2197]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:18:13.030477 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:18:13.044692 systemd[1]: Finished sshkeys.service. Aug 13 00:18:13.076470 amazon-ssm-agent[2173]: Initializing new seelog logger Aug 13 00:18:13.080902 amazon-ssm-agent[2173]: New Seelog Logger Creation Complete Aug 13 00:18:13.080902 amazon-ssm-agent[2173]: 2025/08/13 00:18:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:13.080902 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:13.080902 amazon-ssm-agent[2173]: 2025/08/13 00:18:13 processing appconfig overrides Aug 13 00:18:13.083453 amazon-ssm-agent[2173]: 2025/08/13 00:18:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:13.085380 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:13.085380 amazon-ssm-agent[2173]: 2025/08/13 00:18:13 processing appconfig overrides Aug 13 00:18:13.085380 amazon-ssm-agent[2173]: 2025/08/13 00:18:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:13.085380 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:13.085380 amazon-ssm-agent[2173]: 2025/08/13 00:18:13 processing appconfig overrides Aug 13 00:18:13.085380 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO Proxy environment variables: Aug 13 00:18:13.094102 amazon-ssm-agent[2173]: 2025/08/13 00:18:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:13.094102 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:13.094257 amazon-ssm-agent[2173]: 2025/08/13 00:18:13 processing appconfig overrides Aug 13 00:18:13.184860 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO https_proxy: Aug 13 00:18:13.285449 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO http_proxy: Aug 13 00:18:13.383320 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO no_proxy: Aug 13 00:18:13.482123 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO Checking if agent identity type OnPrem can be assumed Aug 13 00:18:13.583374 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO Checking if agent identity type EC2 can be assumed Aug 13 00:18:13.681371 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO Agent will take identity from EC2 Aug 13 00:18:13.780461 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:18:13.879674 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:18:13.978957 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:18:14.027421 tar[1998]: linux-arm64/README.md Aug 13 00:18:14.061498 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:18:14.078151 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 13 00:18:14.178532 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Aug 13 00:18:14.182873 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO [amazon-ssm-agent] Starting Core Agent Aug 13 00:18:14.182982 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Aug 13 00:18:14.182982 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO [Registrar] Starting registrar module Aug 13 00:18:14.182982 amazon-ssm-agent[2173]: 2025-08-13 00:18:13 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 13 00:18:14.183141 amazon-ssm-agent[2173]: 2025-08-13 00:18:14 INFO [EC2Identity] EC2 registration was successful. Aug 13 00:18:14.183141 amazon-ssm-agent[2173]: 2025-08-13 00:18:14 INFO [CredentialRefresher] credentialRefresher has started Aug 13 00:18:14.184370 amazon-ssm-agent[2173]: 2025-08-13 00:18:14 INFO [CredentialRefresher] Starting credentials refresher loop Aug 13 00:18:14.184370 amazon-ssm-agent[2173]: 2025-08-13 00:18:14 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 13 00:18:14.277669 amazon-ssm-agent[2173]: 2025-08-13 00:18:14 INFO [CredentialRefresher] Next credential rotation will be in 30.0999714459 minutes Aug 13 00:18:14.436771 sshd_keygen[2025]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:18:14.480519 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:18:14.493054 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:18:14.505217 systemd[1]: Started sshd@0-172.31.19.145:22-139.178.89.65:47298.service - OpenSSH per-connection server daemon (139.178.89.65:47298). Aug 13 00:18:14.519508 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:18:14.526414 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:18:14.546899 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:18:14.584108 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:18:14.600982 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:18:14.608899 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:18:14.614610 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:18:14.730977 sshd[2219]: Accepted publickey for core from 139.178.89.65 port 47298 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:14.734286 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:14.753884 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:18:14.763869 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:18:14.778076 systemd-logind[1993]: New session 1 of user core. Aug 13 00:18:14.796614 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:18:14.812693 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:18:14.830613 (systemd)[2230]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:18:15.054691 systemd[2230]: Queued start job for default target default.target. Aug 13 00:18:15.066617 systemd[2230]: Created slice app.slice - User Application Slice. Aug 13 00:18:15.067044 systemd[2230]: Reached target paths.target - Paths. Aug 13 00:18:15.067096 systemd[2230]: Reached target timers.target - Timers. Aug 13 00:18:15.069737 systemd[2230]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:18:15.095805 systemd[2230]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:18:15.096038 systemd[2230]: Reached target sockets.target - Sockets. 
Aug 13 00:18:15.096070 systemd[2230]: Reached target basic.target - Basic System. Aug 13 00:18:15.096165 systemd[2230]: Reached target default.target - Main User Target. Aug 13 00:18:15.096230 systemd[2230]: Startup finished in 253ms. Aug 13 00:18:15.096415 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:18:15.108494 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:18:15.212542 amazon-ssm-agent[2173]: 2025-08-13 00:18:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 13 00:18:15.277953 systemd[1]: Started sshd@1-172.31.19.145:22-139.178.89.65:47308.service - OpenSSH per-connection server daemon (139.178.89.65:47308). Aug 13 00:18:15.315963 amazon-ssm-agent[2173]: 2025-08-13 00:18:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2241) started Aug 13 00:18:15.402842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:15.408531 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:18:15.411808 systemd[1]: Startup finished in 1.179s (kernel) + 9.205s (initrd) + 9.445s (userspace) = 19.830s. Aug 13 00:18:15.415425 amazon-ssm-agent[2173]: 2025-08-13 00:18:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 13 00:18:15.423075 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:18:15.490222 sshd[2244]: Accepted publickey for core from 139.178.89.65 port 47308 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:15.493776 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:15.502392 systemd-logind[1993]: New session 2 of user core. Aug 13 00:18:15.512833 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:18:15.641167 sshd[2244]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:15.646549 systemd[1]: sshd@1-172.31.19.145:22-139.178.89.65:47308.service: Deactivated successfully. Aug 13 00:18:15.651901 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:18:15.652596 ntpd[1987]: Listen normally on 7 eth0 [fe80::4dd:aeff:fe54:f55f%2]:123 Aug 13 00:18:15.653839 ntpd[1987]: 13 Aug 00:18:15 ntpd[1987]: Listen normally on 7 eth0 [fe80::4dd:aeff:fe54:f55f%2]:123 Aug 13 00:18:15.655321 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:18:15.657689 systemd-logind[1993]: Removed session 2. Aug 13 00:18:15.681917 systemd[1]: Started sshd@2-172.31.19.145:22-139.178.89.65:47322.service - OpenSSH per-connection server daemon (139.178.89.65:47322). Aug 13 00:18:15.867093 sshd[2269]: Accepted publickey for core from 139.178.89.65 port 47322 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:15.870183 sshd[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:15.877574 systemd-logind[1993]: New session 3 of user core. Aug 13 00:18:15.887678 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:18:16.010922 sshd[2269]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:16.017644 systemd[1]: sshd@2-172.31.19.145:22-139.178.89.65:47322.service: Deactivated successfully. Aug 13 00:18:16.020338 systemd[1]: session-3.scope: Deactivated successfully. 
Aug 13 00:18:16.026273 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:18:16.029734 systemd-logind[1993]: Removed session 3. Aug 13 00:18:16.047903 systemd[1]: Started sshd@3-172.31.19.145:22-139.178.89.65:47330.service - OpenSSH per-connection server daemon (139.178.89.65:47330). Aug 13 00:18:16.229326 sshd[2280]: Accepted publickey for core from 139.178.89.65 port 47330 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:16.233659 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:16.245197 systemd-logind[1993]: New session 4 of user core. Aug 13 00:18:16.255623 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:18:16.387518 sshd[2280]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:16.393790 systemd[1]: sshd@3-172.31.19.145:22-139.178.89.65:47330.service: Deactivated successfully. Aug 13 00:18:16.398597 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:18:16.402637 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:18:16.406334 systemd-logind[1993]: Removed session 4. Aug 13 00:18:16.423886 systemd[1]: Started sshd@4-172.31.19.145:22-139.178.89.65:47346.service - OpenSSH per-connection server daemon (139.178.89.65:47346). Aug 13 00:18:16.604382 sshd[2288]: Accepted publickey for core from 139.178.89.65 port 47346 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:16.606280 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:16.617386 systemd-logind[1993]: New session 5 of user core. Aug 13 00:18:16.622278 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:18:16.648492 kubelet[2255]: E0813 00:18:16.647828 2255 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:18:16.653412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:18:16.653765 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:18:16.655478 systemd[1]: kubelet.service: Consumed 1.404s CPU time. Aug 13 00:18:16.743887 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:18:16.744548 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:18:16.767565 sudo[2293]: pam_unix(sudo:session): session closed for user root Aug 13 00:18:16.792770 sshd[2288]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:16.799605 systemd[1]: sshd@4-172.31.19.145:22-139.178.89.65:47346.service: Deactivated successfully. Aug 13 00:18:16.803059 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:18:16.804325 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:18:16.806721 systemd-logind[1993]: Removed session 5. Aug 13 00:18:16.833876 systemd[1]: Started sshd@5-172.31.19.145:22-139.178.89.65:47362.service - OpenSSH per-connection server daemon (139.178.89.65:47362). 
Aug 13 00:18:17.002302 sshd[2298]: Accepted publickey for core from 139.178.89.65 port 47362 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:17.004966 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:17.013414 systemd-logind[1993]: New session 6 of user core. Aug 13 00:18:17.019618 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:18:17.127072 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:18:17.128233 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:18:17.134414 sudo[2302]: pam_unix(sudo:session): session closed for user root Aug 13 00:18:17.144533 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:18:17.145200 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:18:17.169883 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 00:18:17.173853 auditctl[2305]: No rules Aug 13 00:18:17.174587 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:18:17.175029 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 00:18:17.183074 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:18:17.235320 augenrules[2323]: No rules Aug 13 00:18:17.238058 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:18:17.240329 sudo[2301]: pam_unix(sudo:session): session closed for user root Aug 13 00:18:17.263823 sshd[2298]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:17.270836 systemd[1]: sshd@5-172.31.19.145:22-139.178.89.65:47362.service: Deactivated successfully. Aug 13 00:18:17.274046 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:18:17.275629 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:18:17.277273 systemd-logind[1993]: Removed session 6. Aug 13 00:18:17.308844 systemd[1]: Started sshd@6-172.31.19.145:22-139.178.89.65:47376.service - OpenSSH per-connection server daemon (139.178.89.65:47376). Aug 13 00:18:17.477410 sshd[2331]: Accepted publickey for core from 139.178.89.65 port 47376 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:17.479939 sshd[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:17.487653 systemd-logind[1993]: New session 7 of user core. Aug 13 00:18:17.499614 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:18:17.604593 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:18:17.605256 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:18:18.121834 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:18:18.140164 (dockerd)[2349]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:18:18.568026 dockerd[2349]: time="2025-08-13T00:18:18.567564305Z" level=info msg="Starting up" Aug 13 00:18:18.811179 systemd-resolved[1930]: Clock change detected. Flushing caches. Aug 13 00:18:18.947678 dockerd[2349]: time="2025-08-13T00:18:18.947624524Z" level=info msg="Loading containers: start." 
Aug 13 00:18:19.104959 kernel: Initializing XFRM netlink socket Aug 13 00:18:19.140168 (udev-worker)[2373]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:18:19.228667 systemd-networkd[1929]: docker0: Link UP Aug 13 00:18:19.255385 dockerd[2349]: time="2025-08-13T00:18:19.254333342Z" level=info msg="Loading containers: done." Aug 13 00:18:19.283296 dockerd[2349]: time="2025-08-13T00:18:19.283231730Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:18:19.283657 dockerd[2349]: time="2025-08-13T00:18:19.283624238Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 00:18:19.283969 dockerd[2349]: time="2025-08-13T00:18:19.283938866Z" level=info msg="Daemon has completed initialization" Aug 13 00:18:19.344200 dockerd[2349]: time="2025-08-13T00:18:19.343967114Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:18:19.344310 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:18:20.475165 containerd[2021]: time="2025-08-13T00:18:20.475000456Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 00:18:21.138609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount816926412.mount: Deactivated successfully. Aug 13 00:18:22.568932 containerd[2021]: time="2025-08-13T00:18:22.568460550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:22.570701 containerd[2021]: time="2025-08-13T00:18:22.570626298Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=27352094" Aug 13 00:18:22.572802 containerd[2021]: time="2025-08-13T00:18:22.572731746Z" level=info msg="ImageCreate event name:\"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:22.578638 containerd[2021]: time="2025-08-13T00:18:22.578554758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:22.580899 containerd[2021]: time="2025-08-13T00:18:22.580829418Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"27348894\" in 2.105753782s" Aug 13 00:18:22.581516 containerd[2021]: time="2025-08-13T00:18:22.581056098Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\"" Aug 13 00:18:22.584388 containerd[2021]: time="2025-08-13T00:18:22.584304786Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 00:18:24.018489 containerd[2021]: time="2025-08-13T00:18:24.018404513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:24.020643 containerd[2021]: 
time="2025-08-13T00:18:24.020510477Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=23537846" Aug 13 00:18:24.021381 containerd[2021]: time="2025-08-13T00:18:24.021242597Z" level=info msg="ImageCreate event name:\"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:24.027413 containerd[2021]: time="2025-08-13T00:18:24.027300437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:24.029871 containerd[2021]: time="2025-08-13T00:18:24.029651273Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"25092764\" in 1.445046487s" Aug 13 00:18:24.029871 containerd[2021]: time="2025-08-13T00:18:24.029715005Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\"" Aug 13 00:18:24.031027 containerd[2021]: time="2025-08-13T00:18:24.030577937Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 00:18:25.241013 containerd[2021]: time="2025-08-13T00:18:25.240943867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:25.242791 containerd[2021]: time="2025-08-13T00:18:25.242716459Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=18293524" Aug 13 00:18:25.243822 containerd[2021]: time="2025-08-13T00:18:25.243732043Z" level=info msg="ImageCreate event name:\"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:25.251139 containerd[2021]: time="2025-08-13T00:18:25.251047243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:25.253935 containerd[2021]: time="2025-08-13T00:18:25.252725923Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"19848460\" in 1.22209287s" Aug 13 00:18:25.253935 containerd[2021]: time="2025-08-13T00:18:25.252794011Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\"" Aug 13 00:18:25.253935 containerd[2021]: time="2025-08-13T00:18:25.253416679Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 00:18:26.567874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1395260696.mount: Deactivated successfully. 
Aug 13 00:18:26.972759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:18:26.980543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:27.352290 containerd[2021]: time="2025-08-13T00:18:27.350676418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:27.354172 containerd[2021]: time="2025-08-13T00:18:27.353603254Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=28199472" Aug 13 00:18:27.356136 containerd[2021]: time="2025-08-13T00:18:27.356068786Z" level=info msg="ImageCreate event name:\"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:27.362495 containerd[2021]: time="2025-08-13T00:18:27.362435578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:27.364824 containerd[2021]: time="2025-08-13T00:18:27.364773274Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"28198491\" in 2.111306146s" Aug 13 00:18:27.365295 containerd[2021]: time="2025-08-13T00:18:27.364967050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\"" Aug 13 00:18:27.366391 containerd[2021]: time="2025-08-13T00:18:27.366124174Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 00:18:27.419650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:27.434610 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:18:27.515149 kubelet[2567]: E0813 00:18:27.515023 2567 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:18:27.522159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:18:27.522554 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:18:27.994691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981768193.mount: Deactivated successfully. 
Aug 13 00:18:29.285220 containerd[2021]: time="2025-08-13T00:18:29.284639952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:29.290911 containerd[2021]: time="2025-08-13T00:18:29.289607184Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Aug 13 00:18:29.304401 containerd[2021]: time="2025-08-13T00:18:29.303138552Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:29.311446 containerd[2021]: time="2025-08-13T00:18:29.311075568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:29.313821 containerd[2021]: time="2025-08-13T00:18:29.313750596Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.947565666s" Aug 13 00:18:29.313821 containerd[2021]: time="2025-08-13T00:18:29.313817784Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Aug 13 00:18:29.314525 containerd[2021]: time="2025-08-13T00:18:29.314488032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:18:29.791120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1402687285.mount: Deactivated successfully. 
Aug 13 00:18:29.798185 containerd[2021]: time="2025-08-13T00:18:29.797741570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:29.799162 containerd[2021]: time="2025-08-13T00:18:29.799083674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Aug 13 00:18:29.800141 containerd[2021]: time="2025-08-13T00:18:29.800054654Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:29.805989 containerd[2021]: time="2025-08-13T00:18:29.805932854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:29.808259 containerd[2021]: time="2025-08-13T00:18:29.808080770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 493.252526ms" Aug 13 00:18:29.808259 containerd[2021]: time="2025-08-13T00:18:29.808132706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:18:29.809112 containerd[2021]: time="2025-08-13T00:18:29.809074490Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 00:18:30.378197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3649010822.mount: Deactivated successfully. Aug 13 00:18:32.627607 containerd[2021]: time="2025-08-13T00:18:32.627518260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:32.673318 containerd[2021]: time="2025-08-13T00:18:32.673242580Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Aug 13 00:18:32.722385 containerd[2021]: time="2025-08-13T00:18:32.722277257Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:32.778923 containerd[2021]: time="2025-08-13T00:18:32.778626125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:32.783968 containerd[2021]: time="2025-08-13T00:18:32.783907661Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.974701279s" Aug 13 00:18:32.784311 containerd[2021]: time="2025-08-13T00:18:32.784156637Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Aug 13 00:18:37.722907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Aug 13 00:18:37.734134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:38.151415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:38.169663 (kubelet)[2714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:18:38.251317 kubelet[2714]: E0813 00:18:38.251254 2714 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:18:38.257230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:18:38.257794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:18:39.752857 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:39.764427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:39.825036 systemd[1]: Reloading requested from client PID 2728 ('systemctl') (unit session-7.scope)... Aug 13 00:18:39.825264 systemd[1]: Reloading... Aug 13 00:18:40.070926 zram_generator::config[2772]: No configuration found. Aug 13 00:18:40.311662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:18:40.497633 systemd[1]: Reloading finished in 671 ms. Aug 13 00:18:40.600908 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:18:40.601113 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:18:40.603924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:40.611766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:40.943824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:40.959456 (kubelet)[2832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:18:41.028237 kubelet[2832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:18:41.028237 kubelet[2832]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:18:41.028237 kubelet[2832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:18:41.028843 kubelet[2832]: I0813 00:18:41.028307 2832 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:18:42.658188 kubelet[2832]: I0813 00:18:42.658118 2832 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:18:42.658188 kubelet[2832]: I0813 00:18:42.658170 2832 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:18:42.658817 kubelet[2832]: I0813 00:18:42.658560 2832 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:18:42.707539 kubelet[2832]: E0813 00:18:42.707466 2832 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.19.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.145:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 00:18:42.708932 kubelet[2832]: I0813 00:18:42.708719 2832 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:18:42.725657 kubelet[2832]: E0813 00:18:42.725598 2832 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:18:42.725657 kubelet[2832]: I0813 00:18:42.725651 2832 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:18:42.731055 kubelet[2832]: I0813 00:18:42.730979 2832 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:18:42.731690 kubelet[2832]: I0813 00:18:42.731641 2832 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:18:42.732002 kubelet[2832]: I0813 00:18:42.731691 2832 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-145","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:18:42.732172 kubelet[2832]: I0813 00:18:42.732146 2832 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:18:42.732172 kubelet[2832]: I0813 00:18:42.732168 2832 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:18:42.732560 kubelet[2832]: I0813 00:18:42.732512 2832 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:18:42.738651 kubelet[2832]: I0813 00:18:42.738596 2832 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:18:42.738651 kubelet[2832]: I0813 00:18:42.738646 2832 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:18:42.740406 kubelet[2832]: I0813 00:18:42.738700 2832 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:18:42.740406 kubelet[2832]: I0813 00:18:42.738729 2832 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:18:42.744916 kubelet[2832]: E0813 00:18:42.744224 2832 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.19.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:18:42.745130 kubelet[2832]: E0813 00:18:42.745088 2832 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.19.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-145&limit=500&resourceVersion=0\": dial tcp 172.31.19.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Aug 13 00:18:42.745749 kubelet[2832]: I0813 00:18:42.745718 2832 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:18:42.747199 kubelet[2832]: I0813 00:18:42.747155 2832 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:18:42.747564 kubelet[2832]: W0813 00:18:42.747544 2832 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:18:42.753823 kubelet[2832]: I0813 00:18:42.753790 2832 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:18:42.754088 kubelet[2832]: I0813 00:18:42.754067 2832 server.go:1289] "Started kubelet" Aug 13 00:18:42.754612 kubelet[2832]: I0813 00:18:42.754549 2832 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:18:42.762384 kubelet[2832]: I0813 00:18:42.762266 2832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:18:42.763731 kubelet[2832]: I0813 00:18:42.763134 2832 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:18:42.771920 kubelet[2832]: I0813 00:18:42.771150 2832 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:18:42.773097 kubelet[2832]: I0813 00:18:42.773050 2832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:18:42.774817 kubelet[2832]: I0813 00:18:42.774770 2832 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:18:42.776083 kubelet[2832]: I0813 00:18:42.776043 2832 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:18:42.776386 kubelet[2832]: E0813 00:18:42.776342 2832 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-145\" not found" Aug 13 00:18:42.776865 kubelet[2832]: I0813 00:18:42.776817 2832 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:18:42.777007 kubelet[2832]: I0813 00:18:42.776970 2832 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:18:42.780291 kubelet[2832]: E0813 00:18:42.777176 2832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.145:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.145:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-145.185b2b834ed1ba4a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-145,UID:ip-172-31-19-145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-145,},FirstTimestamp:2025-08-13 00:18:42.754017866 +0000 UTC m=+1.787433609,LastTimestamp:2025-08-13 00:18:42.754017866 +0000 UTC m=+1.787433609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-145,}" Aug 13 00:18:42.781942 kubelet[2832]: E0813 00:18:42.781528 2832 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.19.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.145:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:18:42.781942 kubelet[2832]: E0813 00:18:42.781690 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-145?timeout=10s\": dial tcp 172.31.19.145:6443: connect: connection refused" interval="200ms" Aug 13 00:18:42.782164 kubelet[2832]: I0813 00:18:42.782105 2832 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:18:42.788287 kubelet[2832]: I0813 00:18:42.787402 2832 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:18:42.788287 kubelet[2832]: I0813 00:18:42.787443 2832 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:18:42.817379 kubelet[2832]: I0813 00:18:42.817322 2832 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:18:42.822338 kubelet[2832]: E0813 00:18:42.817661 2832 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:18:42.823779 kubelet[2832]: I0813 00:18:42.823744 2832 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:18:42.823987 kubelet[2832]: I0813 00:18:42.823967 2832 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:18:42.824123 kubelet[2832]: I0813 00:18:42.824101 2832 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:18:42.824241 kubelet[2832]: I0813 00:18:42.824221 2832 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:18:42.824709 kubelet[2832]: E0813 00:18:42.824363 2832 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:18:42.829403 kubelet[2832]: E0813 00:18:42.829332 2832 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.19.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:18:42.836793 kubelet[2832]: I0813 00:18:42.836758 2832 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:18:42.837140 kubelet[2832]: I0813 00:18:42.837116 2832 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:18:42.837269 kubelet[2832]: I0813 00:18:42.837250 2832 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:18:42.839546 kubelet[2832]: I0813 00:18:42.839509 2832 policy_none.go:49] "None policy: Start" Aug 13 00:18:42.839895 kubelet[2832]: I0813 00:18:42.839725 2832 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:18:42.839895 kubelet[2832]: I0813 00:18:42.839756 2832 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:18:42.849703 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:18:42.868790 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 13 00:18:42.877306 kubelet[2832]: E0813 00:18:42.877254 2832 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-145\" not found" Aug 13 00:18:42.881603 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 00:18:42.895411 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:18:42.899284 kubelet[2832]: E0813 00:18:42.898592 2832 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:18:42.899284 kubelet[2832]: I0813 00:18:42.898909 2832 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:18:42.899284 kubelet[2832]: I0813 00:18:42.898932 2832 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:18:42.900163 kubelet[2832]: I0813 00:18:42.899836 2832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:18:42.902158 kubelet[2832]: E0813 00:18:42.902098 2832 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:18:42.902273 kubelet[2832]: E0813 00:18:42.902175 2832 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-145\" not found" Aug 13 00:18:42.945279 systemd[1]: Created slice kubepods-burstable-pod2db863c9e3bbf002d0e998fb2130b65c.slice - libcontainer container kubepods-burstable-pod2db863c9e3bbf002d0e998fb2130b65c.slice. Aug 13 00:18:42.964659 kubelet[2832]: E0813 00:18:42.964295 2832 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-145\" not found" node="ip-172-31-19-145" Aug 13 00:18:42.972749 systemd[1]: Created slice kubepods-burstable-pod75cb7c37d16c68c1073ab228a46d486a.slice - libcontainer container kubepods-burstable-pod75cb7c37d16c68c1073ab228a46d486a.slice. 
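The "Created slice" lines show the systemd cgroup driver at work (CgroupDriver "systemd", CgroupVersion 2 in the NodeConfig above): each pod gets a kubepods-<qos>-pod<uid>.slice under its QoS parent slice, with dashes in API pod UIDs escaped to underscores (visible later in this journal for the kube-proxy and cilium-operator pods). A sketch of that naming pattern as it appears here; this illustrates the observed convention, not kubelet's actual code:

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the pattern visible in the "Created slice" entries:
// kubepods-<qos>-pod<uid>.slice, with '-' in the pod UID rewritten to '_'
// because '-' is systemd's slice hierarchy separator.
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Static pod UID from the entries above (a config hash, no dashes).
	fmt.Println(sliceName("burstable", "2db863c9e3bbf002d0e998fb2130b65c"))
	// API pod UID from later in this journal (dashes become underscores).
	fmt.Println(sliceName("besteffort", "6a3e3430-5653-4dcb-b04f-a1d4bf36178d"))
}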
Aug 13 00:18:42.977395 kubelet[2832]: I0813 00:18:42.977354 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2db863c9e3bbf002d0e998fb2130b65c-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-145\" (UID: \"2db863c9e3bbf002d0e998fb2130b65c\") " pod="kube-system/kube-scheduler-ip-172-31-19-145" Aug 13 00:18:42.977712 kubelet[2832]: I0813 00:18:42.977683 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0451b8ad5c7aeada30025996aab9599a-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-145\" (UID: \"0451b8ad5c7aeada30025996aab9599a\") " pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:42.977987 kubelet[2832]: I0813 00:18:42.977958 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0451b8ad5c7aeada30025996aab9599a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-145\" (UID: \"0451b8ad5c7aeada30025996aab9599a\") " pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:42.978172 kubelet[2832]: I0813 00:18:42.978133 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75cb7c37d16c68c1073ab228a46d486a-ca-certs\") pod \"kube-apiserver-ip-172-31-19-145\" (UID: \"75cb7c37d16c68c1073ab228a46d486a\") " pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:42.978388 kubelet[2832]: I0813 00:18:42.978347 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75cb7c37d16c68c1073ab228a46d486a-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-145\" (UID: \"75cb7c37d16c68c1073ab228a46d486a\") " pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:42.978612 kubelet[2832]: I0813 00:18:42.978543 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75cb7c37d16c68c1073ab228a46d486a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-145\" (UID: \"75cb7c37d16c68c1073ab228a46d486a\") " pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:42.979580 kubelet[2832]: I0813 00:18:42.979392 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0451b8ad5c7aeada30025996aab9599a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-145\" (UID: \"0451b8ad5c7aeada30025996aab9599a\") " pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:42.979580 kubelet[2832]: I0813 00:18:42.979468 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0451b8ad5c7aeada30025996aab9599a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-145\" (UID: \"0451b8ad5c7aeada30025996aab9599a\") " pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:42.979580 kubelet[2832]: I0813 00:18:42.979513 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0451b8ad5c7aeada30025996aab9599a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-145\" (UID: \"0451b8ad5c7aeada30025996aab9599a\") " pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:42.980956 kubelet[2832]: E0813 00:18:42.980590 2832 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-145\" not found" node="ip-172-31-19-145" Aug 13 00:18:42.982548 kubelet[2832]: E0813 00:18:42.982469 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-145?timeout=10s\": dial tcp 172.31.19.145:6443: connect: connection refused" interval="400ms" Aug 13 00:18:42.987372 systemd[1]: Created slice kubepods-burstable-pod0451b8ad5c7aeada30025996aab9599a.slice - libcontainer container kubepods-burstable-pod0451b8ad5c7aeada30025996aab9599a.slice. Aug 13 00:18:42.991686 kubelet[2832]: E0813 00:18:42.991625 2832 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-145\" not found" node="ip-172-31-19-145" Aug 13 00:18:43.004967 kubelet[2832]: I0813 00:18:43.004419 2832 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-145" Aug 13 00:18:43.005569 kubelet[2832]: E0813 00:18:43.005495 2832 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.145:6443/api/v1/nodes\": dial tcp 172.31.19.145:6443: connect: connection refused" node="ip-172-31-19-145" Aug 13 00:18:43.208533 kubelet[2832]: I0813 00:18:43.208302 2832 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-145" Aug 13 00:18:43.209250 kubelet[2832]: E0813 00:18:43.208801 2832 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.145:6443/api/v1/nodes\": dial tcp 172.31.19.145:6443: connect: connection refused" node="ip-172-31-19-145" Aug 13 00:18:43.266222 containerd[2021]: time="2025-08-13T00:18:43.266156209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-145,Uid:2db863c9e3bbf002d0e998fb2130b65c,Namespace:kube-system,Attempt:0,}" Aug 13 00:18:43.282446 containerd[2021]: time="2025-08-13T00:18:43.282379369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-145,Uid:75cb7c37d16c68c1073ab228a46d486a,Namespace:kube-system,Attempt:0,}" Aug 13 00:18:43.299568 containerd[2021]: time="2025-08-13T00:18:43.299073301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-145,Uid:0451b8ad5c7aeada30025996aab9599a,Namespace:kube-system,Attempt:0,}" Aug 13 00:18:43.383234 kubelet[2832]: E0813 00:18:43.383166 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-145?timeout=10s\": dial tcp 172.31.19.145:6443: connect: connection refused" interval="800ms" Aug 13 00:18:43.611154 kubelet[2832]: I0813 00:18:43.611065 2832 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-145" Aug 13 00:18:43.611626 kubelet[2832]: E0813 00:18:43.611552 2832 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.145:6443/api/v1/nodes\": dial tcp 172.31.19.145:6443: connect: connection refused" node="ip-172-31-19-145" Aug 13 
00:18:43.627589 kubelet[2832]: E0813 00:18:43.627533 2832 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.19.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:18:43.702210 kubelet[2832]: E0813 00:18:43.702148 2832 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.19.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:18:43.800781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2411590953.mount: Deactivated successfully. Aug 13 00:18:43.815601 containerd[2021]: time="2025-08-13T00:18:43.815520844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:18:43.817766 containerd[2021]: time="2025-08-13T00:18:43.817694188Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:18:43.819692 containerd[2021]: time="2025-08-13T00:18:43.819591316Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Aug 13 00:18:43.821699 containerd[2021]: time="2025-08-13T00:18:43.821648368Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:18:43.823955 containerd[2021]: time="2025-08-13T00:18:43.823861108Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:18:43.826962 containerd[2021]: time="2025-08-13T00:18:43.826676920Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:18:43.828587 containerd[2021]: time="2025-08-13T00:18:43.828482548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:18:43.832945 containerd[2021]: time="2025-08-13T00:18:43.832787656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:18:43.838458 containerd[2021]: time="2025-08-13T00:18:43.837751312Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.260859ms" Aug 13 00:18:43.842207 containerd[2021]: time="2025-08-13T00:18:43.842126236Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.855487ms" Aug 13 00:18:43.849607 containerd[2021]: time="2025-08-13T00:18:43.849238960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.463803ms" Aug 13 00:18:43.876008 kubelet[2832]: E0813 00:18:43.875823 2832 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.19.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-145&limit=500&resourceVersion=0\": dial tcp 172.31.19.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:18:44.060082 containerd[2021]: time="2025-08-13T00:18:44.059797561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:18:44.062981 containerd[2021]: time="2025-08-13T00:18:44.061305013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:18:44.063949 containerd[2021]: time="2025-08-13T00:18:44.062700853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:18:44.063949 containerd[2021]: time="2025-08-13T00:18:44.063069457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:18:44.063949 containerd[2021]: time="2025-08-13T00:18:44.063251749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:44.064894 containerd[2021]: time="2025-08-13T00:18:44.064690249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:18:44.064894 containerd[2021]: time="2025-08-13T00:18:44.064728937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:44.065404 containerd[2021]: time="2025-08-13T00:18:44.065107405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:18:44.065606 containerd[2021]: time="2025-08-13T00:18:44.065365441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:44.066181 containerd[2021]: time="2025-08-13T00:18:44.066010981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:44.067219 containerd[2021]: time="2025-08-13T00:18:44.067138801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:44.071931 containerd[2021]: time="2025-08-13T00:18:44.071649169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:44.113250 systemd[1]: Started cri-containerd-81f776aa1f5bb1e401f5ad001a5916a1fa057380bd82a6f33f3fc73519caa27d.scope - libcontainer container 81f776aa1f5bb1e401f5ad001a5916a1fa057380bd82a6f33f3fc73519caa27d. Aug 13 00:18:44.127714 systemd[1]: Started cri-containerd-f0b33086a9e5d98c91b760b054e872efdfdb6fd3410173d689f2fefba542bd2e.scope - libcontainer container f0b33086a9e5d98c91b760b054e872efdfdb6fd3410173d689f2fefba542bd2e. Aug 13 00:18:44.144092 systemd[1]: Started cri-containerd-50a07e492f79756a719a483337553ece6cd84596d37fb3f0f82e3e765116ac7b.scope - libcontainer container 50a07e492f79756a719a483337553ece6cd84596d37fb3f0f82e3e765116ac7b. Aug 13 00:18:44.161140 kubelet[2832]: E0813 00:18:44.160311 2832 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.19.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:18:44.184446 kubelet[2832]: E0813 00:18:44.184382 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-145?timeout=10s\": dial tcp 172.31.19.145:6443: connect: connection refused" interval="1.6s" Aug 13 00:18:44.254537 containerd[2021]: time="2025-08-13T00:18:44.254303618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-145,Uid:75cb7c37d16c68c1073ab228a46d486a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0b33086a9e5d98c91b760b054e872efdfdb6fd3410173d689f2fefba542bd2e\"" Aug 13 00:18:44.255205 containerd[2021]: time="2025-08-13T00:18:44.254814242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-145,Uid:0451b8ad5c7aeada30025996aab9599a,Namespace:kube-system,Attempt:0,} returns sandbox id \"81f776aa1f5bb1e401f5ad001a5916a1fa057380bd82a6f33f3fc73519caa27d\"" Aug 13 00:18:44.270556 containerd[2021]: time="2025-08-13T00:18:44.270396662Z" level=info msg="CreateContainer within sandbox \"81f776aa1f5bb1e401f5ad001a5916a1fa057380bd82a6f33f3fc73519caa27d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:18:44.273860 containerd[2021]: time="2025-08-13T00:18:44.273640142Z" level=info msg="CreateContainer within sandbox \"f0b33086a9e5d98c91b760b054e872efdfdb6fd3410173d689f2fefba542bd2e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:18:44.289784 containerd[2021]: time="2025-08-13T00:18:44.289641890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-145,Uid:2db863c9e3bbf002d0e998fb2130b65c,Namespace:kube-system,Attempt:0,} returns sandbox id \"50a07e492f79756a719a483337553ece6cd84596d37fb3f0f82e3e765116ac7b\"" Aug 13 00:18:44.302025 containerd[2021]: time="2025-08-13T00:18:44.301838102Z" level=info msg="CreateContainer within sandbox \"50a07e492f79756a719a483337553ece6cd84596d37fb3f0f82e3e765116ac7b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:18:44.310289 containerd[2021]: time="2025-08-13T00:18:44.310171118Z" level=info msg="CreateContainer within sandbox \"81f776aa1f5bb1e401f5ad001a5916a1fa057380bd82a6f33f3fc73519caa27d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"e0bdb9d42b89aaf8917616ea139ffd591e75732ae256b1e4b13bcf3384e90424\"" Aug 13 00:18:44.311724 containerd[2021]: time="2025-08-13T00:18:44.311668178Z" level=info msg="StartContainer for \"e0bdb9d42b89aaf8917616ea139ffd591e75732ae256b1e4b13bcf3384e90424\"" Aug 13 00:18:44.327565 containerd[2021]: time="2025-08-13T00:18:44.327452102Z" level=info msg="CreateContainer within sandbox \"f0b33086a9e5d98c91b760b054e872efdfdb6fd3410173d689f2fefba542bd2e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ebfc5edcb85ad76c58b426b919c5f8dc2720a8d70e58426f2d084b1b4cc12e3c\"" Aug 13 00:18:44.328682 containerd[2021]: time="2025-08-13T00:18:44.328534718Z" level=info msg="StartContainer for \"ebfc5edcb85ad76c58b426b919c5f8dc2720a8d70e58426f2d084b1b4cc12e3c\"" Aug 13 00:18:44.347723 containerd[2021]: time="2025-08-13T00:18:44.347541434Z" level=info msg="CreateContainer within sandbox \"50a07e492f79756a719a483337553ece6cd84596d37fb3f0f82e3e765116ac7b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"56c75af2f49bee5170dce06615edf6b4160ae1d39288025a93f0b744c501460b\"" Aug 13 00:18:44.348509 containerd[2021]: time="2025-08-13T00:18:44.348469610Z" level=info msg="StartContainer for \"56c75af2f49bee5170dce06615edf6b4160ae1d39288025a93f0b744c501460b\"" Aug 13 00:18:44.372612 systemd[1]: Started cri-containerd-e0bdb9d42b89aaf8917616ea139ffd591e75732ae256b1e4b13bcf3384e90424.scope - libcontainer container e0bdb9d42b89aaf8917616ea139ffd591e75732ae256b1e4b13bcf3384e90424. Aug 13 00:18:44.409594 systemd[1]: Started cri-containerd-ebfc5edcb85ad76c58b426b919c5f8dc2720a8d70e58426f2d084b1b4cc12e3c.scope - libcontainer container ebfc5edcb85ad76c58b426b919c5f8dc2720a8d70e58426f2d084b1b4cc12e3c. Aug 13 00:18:44.417527 kubelet[2832]: I0813 00:18:44.417459 2832 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-145" Aug 13 00:18:44.418026 kubelet[2832]: E0813 00:18:44.417958 2832 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.145:6443/api/v1/nodes\": dial tcp 172.31.19.145:6443: connect: connection refused" node="ip-172-31-19-145" Aug 13 00:18:44.445233 systemd[1]: Started cri-containerd-56c75af2f49bee5170dce06615edf6b4160ae1d39288025a93f0b744c501460b.scope - libcontainer container 56c75af2f49bee5170dce06615edf6b4160ae1d39288025a93f0b744c501460b. 
Aug 13 00:18:44.527919 containerd[2021]: time="2025-08-13T00:18:44.527585211Z" level=info msg="StartContainer for \"e0bdb9d42b89aaf8917616ea139ffd591e75732ae256b1e4b13bcf3384e90424\" returns successfully" Aug 13 00:18:44.543906 containerd[2021]: time="2025-08-13T00:18:44.543818427Z" level=info msg="StartContainer for \"ebfc5edcb85ad76c58b426b919c5f8dc2720a8d70e58426f2d084b1b4cc12e3c\" returns successfully" Aug 13 00:18:44.594778 containerd[2021]: time="2025-08-13T00:18:44.594240160Z" level=info msg="StartContainer for \"56c75af2f49bee5170dce06615edf6b4160ae1d39288025a93f0b744c501460b\" returns successfully" Aug 13 00:18:44.844982 kubelet[2832]: E0813 00:18:44.844623 2832 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-145\" not found" node="ip-172-31-19-145" Aug 13 00:18:44.850690 kubelet[2832]: E0813 00:18:44.850567 2832 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-145\" not found" node="ip-172-31-19-145" Aug 13 00:18:44.856245 kubelet[2832]: E0813 00:18:44.855593 2832 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-145\" not found" node="ip-172-31-19-145" Aug 13 00:18:45.860194 kubelet[2832]: E0813 00:18:45.859584 2832 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-145\" not found" node="ip-172-31-19-145" Aug 13 00:18:45.860194 kubelet[2832]: E0813 00:18:45.860018 2832 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-145\" not found" node="ip-172-31-19-145" Aug 13 00:18:46.020707 kubelet[2832]: I0813 00:18:46.019978 2832 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-145" Aug 13 00:18:47.013539 kubelet[2832]: E0813 00:18:47.012854 2832 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-145\" not found" node="ip-172-31-19-145" Aug 13 00:18:49.332737 kubelet[2832]: E0813 00:18:49.332672 2832 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-145\" not found" node="ip-172-31-19-145" Aug 13 00:18:49.401521 kubelet[2832]: I0813 00:18:49.400483 2832 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-145" Aug 13 00:18:49.477566 kubelet[2832]: I0813 00:18:49.477523 2832 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:49.492684 kubelet[2832]: E0813 00:18:49.492363 2832 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-145\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:49.492684 kubelet[2832]: I0813 00:18:49.492407 2832 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:49.500690 kubelet[2832]: E0813 00:18:49.500297 2832 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-145\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:49.500690 kubelet[2832]: I0813 00:18:49.500345 2832 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ip-172-31-19-145" Aug 13 00:18:49.503700 kubelet[2832]: E0813 00:18:49.503644 2832 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-145\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-145" Aug 13 00:18:49.745246 kubelet[2832]: I0813 00:18:49.744929 2832 apiserver.go:52] "Watching apiserver" Aug 13 00:18:49.777645 kubelet[2832]: I0813 00:18:49.777607 2832 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:18:50.934999 kubelet[2832]: I0813 00:18:50.934146 2832 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:51.955858 systemd[1]: Reloading requested from client PID 3119 ('systemctl') (unit session-7.scope)... Aug 13 00:18:51.956407 systemd[1]: Reloading... Aug 13 00:18:52.146950 zram_generator::config[3168]: No configuration found. Aug 13 00:18:52.380412 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:18:52.594479 systemd[1]: Reloading finished in 637 ms. Aug 13 00:18:52.682416 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:52.700121 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:18:52.700719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:52.700828 systemd[1]: kubelet.service: Consumed 2.561s CPU time, 127.7M memory peak, 0B memory swap peak. Aug 13 00:18:52.709423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:53.065196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:53.078770 (kubelet)[3219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:18:53.195469 kubelet[3219]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:18:53.195469 kubelet[3219]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:18:53.195469 kubelet[3219]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:18:53.195469 kubelet[3219]: I0813 00:18:53.193412 3219 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:18:53.216240 kubelet[3219]: I0813 00:18:53.216178 3219 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:18:53.216240 kubelet[3219]: I0813 00:18:53.216224 3219 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:18:53.217299 kubelet[3219]: I0813 00:18:53.216640 3219 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:18:53.219246 kubelet[3219]: I0813 00:18:53.219194 3219 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 00:18:53.224000 kubelet[3219]: I0813 00:18:53.223626 3219 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:18:53.227248 sudo[3233]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:18:53.228934 sudo[3233]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 00:18:53.233875 kubelet[3219]: E0813 00:18:53.233690 3219 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:18:53.233875 kubelet[3219]: I0813 00:18:53.233741 3219 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:18:53.240546 kubelet[3219]: I0813 00:18:53.240386 3219 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:18:53.241620 kubelet[3219]: I0813 00:18:53.241143 3219 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:18:53.241620 kubelet[3219]: I0813 00:18:53.241199 3219 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-145","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:18:53.241620 kubelet[3219]: I0813 00:18:53.241467 3219 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:18:53.241620 kubelet[3219]: I0813 00:18:53.241488 3219 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:18:53.241620 kubelet[3219]: I0813 00:18:53.241563 3219 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:18:53.244940 kubelet[3219]: I0813 00:18:53.242398 3219 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:18:53.244940 kubelet[3219]: I0813 00:18:53.244063 3219 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:18:53.244940 kubelet[3219]: I0813 00:18:53.244120 3219 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:18:53.244940 kubelet[3219]: I0813 00:18:53.244147 3219 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:18:53.256867 kubelet[3219]: I0813 00:18:53.256824 3219 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:18:53.260899 kubelet[3219]: I0813 00:18:53.260831 3219 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:18:53.301126 kubelet[3219]: I0813 00:18:53.300832 3219 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:18:53.303007 kubelet[3219]: I0813 00:18:53.302977 3219 server.go:1289] "Started kubelet" Aug 13 00:18:53.309082 kubelet[3219]: I0813 00:18:53.308641 3219 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:18:53.316799 kubelet[3219]: I0813 
00:18:53.316584 3219 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:18:53.318366 kubelet[3219]: I0813 00:18:53.318262 3219 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:18:53.320944 kubelet[3219]: I0813 00:18:53.319321 3219 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:18:53.331937 kubelet[3219]: I0813 00:18:53.331652 3219 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:18:53.337415 kubelet[3219]: I0813 00:18:53.336175 3219 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:18:53.343816 kubelet[3219]: I0813 00:18:53.343779 3219 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:18:53.348305 kubelet[3219]: I0813 00:18:53.348269 3219 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:18:53.353662 kubelet[3219]: I0813 00:18:53.352107 3219 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:18:53.379633 kubelet[3219]: I0813 00:18:53.379593 3219 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:18:53.385239 kubelet[3219]: I0813 00:18:53.385035 3219 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:18:53.391455 kubelet[3219]: E0813 00:18:53.390569 3219 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:18:53.394674 kubelet[3219]: I0813 00:18:53.394538 3219 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:18:53.424667 kubelet[3219]: I0813 00:18:53.424346 3219 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:18:53.431046 kubelet[3219]: I0813 00:18:53.431000 3219 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:18:53.431520 kubelet[3219]: I0813 00:18:53.431248 3219 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:18:53.431943 kubelet[3219]: I0813 00:18:53.431288 3219 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
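The second run loads the rotated client credential from /var/lib/kubelet/pki/kubelet-client-current.pem and starts the serving-cert controller on kubelet.crt/kubelet.key, which are the usual files to inspect when bootstrap CSRs or serving certificates misbehave. A stdlib-only sketch that prints the subject and expiry of whatever certificates sit at those paths (paths copied from the controller names above; meant to be run on the node itself):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Paths taken from the certificate_store and dynamic_serving_content
	// controller names in the entries above.
	paths := []string{
		"/var/lib/kubelet/pki/kubelet-client-current.pem",
		"/var/lib/kubelet/pki/kubelet.crt",
	}
	for _, path := range paths {
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Printf("%s: %v\n", path, err)
			continue
		}
		// kubelet-client-current.pem holds cert and key blocks together,
		// so scan until the first CERTIFICATE block.
		var cert *x509.Certificate
		for rest := data; len(rest) > 0; {
			var block *pem.Block
			block, rest = pem.Decode(rest)
			if block == nil {
				break
			}
			if block.Type == "CERTIFICATE" {
				cert, err = x509.ParseCertificate(block.Bytes)
				break
			}
		}
		if err != nil || cert == nil {
			fmt.Printf("%s: no parsable certificate block (%v)\n", path, err)
			continue
		}
		fmt.Printf("%s: subject=%q notAfter=%s\n", path, cert.Subject.String(), cert.NotAfter)
	}
}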
Aug 13 00:18:53.431943 kubelet[3219]: I0813 00:18:53.431845 3219 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:18:53.432262 kubelet[3219]: E0813 00:18:53.432131 3219 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:18:53.538488 kubelet[3219]: E0813 00:18:53.538311 3219 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:18:53.629211 kubelet[3219]: I0813 00:18:53.628550 3219 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:18:53.629211 kubelet[3219]: I0813 00:18:53.628584 3219 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:18:53.629211 kubelet[3219]: I0813 00:18:53.628619 3219 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:18:53.629211 kubelet[3219]: I0813 00:18:53.628833 3219 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:18:53.629211 kubelet[3219]: I0813 00:18:53.628853 3219 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:18:53.630103 kubelet[3219]: I0813 00:18:53.630012 3219 policy_none.go:49] "None policy: Start" Aug 13 00:18:53.630103 kubelet[3219]: I0813 00:18:53.630054 3219 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:18:53.630386 kubelet[3219]: I0813 00:18:53.630192 3219 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:18:53.631030 kubelet[3219]: I0813 00:18:53.630780 3219 state_mem.go:75] "Updated machine memory state" Aug 13 00:18:53.642976 kubelet[3219]: E0813 00:18:53.642346 3219 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:18:53.642976 kubelet[3219]: I0813 00:18:53.642619 3219 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:18:53.642976 kubelet[3219]: I0813 00:18:53.642638 3219 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:18:53.643218 kubelet[3219]: I0813 00:18:53.642997 3219 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:18:53.652947 kubelet[3219]: E0813 00:18:53.651834 3219 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:18:53.742735 kubelet[3219]: I0813 00:18:53.742465 3219 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:53.744229 kubelet[3219]: I0813 00:18:53.743579 3219 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-145" Aug 13 00:18:53.746716 kubelet[3219]: I0813 00:18:53.746353 3219 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:53.762064 kubelet[3219]: I0813 00:18:53.761672 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75cb7c37d16c68c1073ab228a46d486a-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-145\" (UID: \"75cb7c37d16c68c1073ab228a46d486a\") " pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:53.766357 kubelet[3219]: I0813 00:18:53.764239 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0451b8ad5c7aeada30025996aab9599a-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-145\" (UID: \"0451b8ad5c7aeada30025996aab9599a\") " pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:53.767106 kubelet[3219]: I0813 00:18:53.767056 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0451b8ad5c7aeada30025996aab9599a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-145\" (UID: \"0451b8ad5c7aeada30025996aab9599a\") " pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:53.767320 kubelet[3219]: I0813 00:18:53.767296 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2db863c9e3bbf002d0e998fb2130b65c-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-145\" (UID: \"2db863c9e3bbf002d0e998fb2130b65c\") " pod="kube-system/kube-scheduler-ip-172-31-19-145" Aug 13 00:18:53.768035 kubelet[3219]: I0813 00:18:53.764992 3219 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-145" Aug 13 00:18:53.768516 kubelet[3219]: I0813 00:18:53.767996 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75cb7c37d16c68c1073ab228a46d486a-ca-certs\") pod \"kube-apiserver-ip-172-31-19-145\" (UID: \"75cb7c37d16c68c1073ab228a46d486a\") " pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:53.769947 kubelet[3219]: I0813 00:18:53.768775 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75cb7c37d16c68c1073ab228a46d486a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-145\" (UID: \"75cb7c37d16c68c1073ab228a46d486a\") " pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:53.769947 kubelet[3219]: I0813 00:18:53.769743 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0451b8ad5c7aeada30025996aab9599a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-145\" (UID: \"0451b8ad5c7aeada30025996aab9599a\") " 
pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:53.769947 kubelet[3219]: I0813 00:18:53.769809 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0451b8ad5c7aeada30025996aab9599a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-145\" (UID: \"0451b8ad5c7aeada30025996aab9599a\") " pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:53.769947 kubelet[3219]: I0813 00:18:53.769850 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0451b8ad5c7aeada30025996aab9599a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-145\" (UID: \"0451b8ad5c7aeada30025996aab9599a\") " pod="kube-system/kube-controller-manager-ip-172-31-19-145" Aug 13 00:18:53.786126 kubelet[3219]: E0813 00:18:53.785460 3219 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-145\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:53.793787 kubelet[3219]: I0813 00:18:53.793068 3219 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-19-145" Aug 13 00:18:53.793787 kubelet[3219]: I0813 00:18:53.793176 3219 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-145" Aug 13 00:18:54.248120 sudo[3233]: pam_unix(sudo:session): session closed for user root Aug 13 00:18:54.251302 kubelet[3219]: I0813 00:18:54.250444 3219 apiserver.go:52] "Watching apiserver" Aug 13 00:18:54.340664 kubelet[3219]: I0813 00:18:54.340561 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-145" podStartSLOduration=4.340539036 podStartE2EDuration="4.340539036s" podCreationTimestamp="2025-08-13 00:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:18:54.325695684 +0000 UTC m=+1.233677911" watchObservedRunningTime="2025-08-13 00:18:54.340539036 +0000 UTC m=+1.248521251" Aug 13 00:18:54.352465 kubelet[3219]: I0813 00:18:54.352365 3219 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:18:54.365600 kubelet[3219]: I0813 00:18:54.365310 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-145" podStartSLOduration=1.36504216 podStartE2EDuration="1.36504216s" podCreationTimestamp="2025-08-13 00:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:18:54.341376936 +0000 UTC m=+1.249359187" watchObservedRunningTime="2025-08-13 00:18:54.36504216 +0000 UTC m=+1.273024399" Aug 13 00:18:54.401172 kubelet[3219]: I0813 00:18:54.400073 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-145" podStartSLOduration=1.400027368 podStartE2EDuration="1.400027368s" podCreationTimestamp="2025-08-13 00:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:18:54.367127232 +0000 UTC m=+1.275109483" watchObservedRunningTime="2025-08-13 00:18:54.400027368 +0000 UTC m=+1.308009607" Aug 13 00:18:54.489711 kubelet[3219]: I0813 00:18:54.488269 
3219 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:54.489711 kubelet[3219]: I0813 00:18:54.488315 3219 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-145" Aug 13 00:18:54.508227 kubelet[3219]: E0813 00:18:54.508080 3219 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-145\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-145" Aug 13 00:18:54.510650 kubelet[3219]: E0813 00:18:54.509847 3219 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-145\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-145" Aug 13 00:18:56.249234 kubelet[3219]: I0813 00:18:56.249003 3219 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:18:56.253181 kubelet[3219]: I0813 00:18:56.252937 3219 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:18:56.253286 containerd[2021]: time="2025-08-13T00:18:56.251102989Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:18:56.743311 update_engine[1994]: I20250813 00:18:56.743141 1994 update_attempter.cc:509] Updating boot flags... Aug 13 00:18:56.916081 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3283) Aug 13 00:18:57.512935 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3282) Aug 13 00:18:57.632643 systemd[1]: Created slice kubepods-besteffort-pod6a3e3430_5653_4dcb_b04f_a1d4bf36178d.slice - libcontainer container kubepods-besteffort-pod6a3e3430_5653_4dcb_b04f_a1d4bf36178d.slice. 
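The pod_startup_latency_tracker entries above are plain timestamp arithmetic: podStartSLOduration is observedRunningTime minus podCreationTimestamp, and the zero firstStartedPulling/lastFinishedPulling values mean no image-pull time is subtracted for these pre-pulled control-plane images. Re-deriving the 4.340539036s figure for kube-apiserver-ip-172-31-19-145 from the timestamps exactly as they are printed (a check of the numbers in the log, not the tracker's implementation):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching how the timestamps are printed in the journal.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2025-08-13 00:18:50 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-08-13 00:18:54.340539036 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 4.340539036s, matching podStartSLOduration for
	// kube-system/kube-apiserver-ip-172-31-19-145 above.
	fmt.Println(observed.Sub(created))
}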
Aug 13 00:18:57.700938 kubelet[3219]: I0813 00:18:57.698607 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a3e3430-5653-4dcb-b04f-a1d4bf36178d-xtables-lock\") pod \"kube-proxy-77l7p\" (UID: \"6a3e3430-5653-4dcb-b04f-a1d4bf36178d\") " pod="kube-system/kube-proxy-77l7p" Aug 13 00:18:57.700938 kubelet[3219]: I0813 00:18:57.698694 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2spbx\" (UniqueName: \"kubernetes.io/projected/6a3e3430-5653-4dcb-b04f-a1d4bf36178d-kube-api-access-2spbx\") pod \"kube-proxy-77l7p\" (UID: \"6a3e3430-5653-4dcb-b04f-a1d4bf36178d\") " pod="kube-system/kube-proxy-77l7p" Aug 13 00:18:57.700938 kubelet[3219]: I0813 00:18:57.698743 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a3e3430-5653-4dcb-b04f-a1d4bf36178d-kube-proxy\") pod \"kube-proxy-77l7p\" (UID: \"6a3e3430-5653-4dcb-b04f-a1d4bf36178d\") " pod="kube-system/kube-proxy-77l7p" Aug 13 00:18:57.700938 kubelet[3219]: I0813 00:18:57.698778 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a3e3430-5653-4dcb-b04f-a1d4bf36178d-lib-modules\") pod \"kube-proxy-77l7p\" (UID: \"6a3e3430-5653-4dcb-b04f-a1d4bf36178d\") " pod="kube-system/kube-proxy-77l7p" Aug 13 00:18:57.715285 kubelet[3219]: E0813 00:18:57.714150 3219 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-19-145\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-145' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Aug 13 00:18:57.715285 kubelet[3219]: E0813 00:18:57.714277 3219 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-19-145\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-145' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Aug 13 00:18:58.083378 systemd[1]: Created slice kubepods-besteffort-podad5613d6_bb3a_4b38_851e_500e0b9f338e.slice - libcontainer container kubepods-besteffort-podad5613d6_bb3a_4b38_851e_500e0b9f338e.slice. 
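The two "is forbidden ... no relationship found between node 'ip-172-31-19-145' and this object" reflector errors come from the node authorizer: the kubelet's node credential may only read a ConfigMap once a pod that references it is bound to this node, and the kube-proxy pod had only just been scheduled, so the errors resolve themselves moments later. A hedged sketch that reproduces the same class of check from the node's credentials; it assumes client-go and a kubeconfig path such as /etc/kubernetes/kubelet.conf, neither of which appears in this log:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path to the node's credentials; not taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	_, err = client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
	switch {
	case apierrors.IsForbidden(err):
		// Same class of failure as the reflector errors above: the node
		// authorizer has not (yet) linked this node to the object.
		fmt.Println("forbidden:", err)
	case err != nil:
		fmt.Println("error:", err)
	default:
		fmt.Println("kube-proxy ConfigMap is readable with this credential")
	}
}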
Aug 13 00:18:58.099570 kubelet[3219]: E0813 00:18:58.097505 3219 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-19-145\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-145' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Aug 13 00:18:58.100515 kubelet[3219]: I0813 00:18:58.100442 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmrsd\" (UniqueName: \"kubernetes.io/projected/ad5613d6-bb3a-4b38-851e-500e0b9f338e-kube-api-access-tmrsd\") pod \"cilium-operator-6c4d7847fc-sj22q\" (UID: \"ad5613d6-bb3a-4b38-851e-500e0b9f338e\") " pod="kube-system/cilium-operator-6c4d7847fc-sj22q" Aug 13 00:18:58.100658 kubelet[3219]: I0813 00:18:58.100539 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad5613d6-bb3a-4b38-851e-500e0b9f338e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-sj22q\" (UID: \"ad5613d6-bb3a-4b38-851e-500e0b9f338e\") " pod="kube-system/cilium-operator-6c4d7847fc-sj22q" Aug 13 00:18:58.114744 systemd[1]: Created slice kubepods-burstable-podbe58da57_708a_440a_9d4d_10badcc9f077.slice - libcontainer container kubepods-burstable-podbe58da57_708a_440a_9d4d_10badcc9f077.slice. Aug 13 00:18:58.201953 kubelet[3219]: I0813 00:18:58.201244 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-etc-cni-netd\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.201953 kubelet[3219]: I0813 00:18:58.201310 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-xtables-lock\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.201953 kubelet[3219]: I0813 00:18:58.201349 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-host-proc-sys-kernel\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.201953 kubelet[3219]: I0813 00:18:58.201386 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-cilium-run\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.201953 kubelet[3219]: I0813 00:18:58.201421 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-cilium-cgroup\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.201953 kubelet[3219]: I0813 00:18:58.201455 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be58da57-708a-440a-9d4d-10badcc9f077-hubble-tls\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.202419 kubelet[3219]: I0813 00:18:58.201514 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-bpf-maps\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.202419 kubelet[3219]: I0813 00:18:58.201556 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-cni-path\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.202419 kubelet[3219]: I0813 00:18:58.201593 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-lib-modules\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.202419 kubelet[3219]: I0813 00:18:58.201630 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be58da57-708a-440a-9d4d-10badcc9f077-clustermesh-secrets\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.202419 kubelet[3219]: I0813 00:18:58.201677 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be58da57-708a-440a-9d4d-10badcc9f077-cilium-config-path\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.202419 kubelet[3219]: I0813 00:18:58.201717 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgqbp\" (UniqueName: \"kubernetes.io/projected/be58da57-708a-440a-9d4d-10badcc9f077-kube-api-access-fgqbp\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.202729 kubelet[3219]: I0813 00:18:58.201757 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-host-proc-sys-net\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.202729 kubelet[3219]: I0813 00:18:58.201815 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-hostproc\") pod \"cilium-zx6wm\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " pod="kube-system/cilium-zx6wm" Aug 13 00:18:58.800731 kubelet[3219]: E0813 00:18:58.800631 3219 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Aug 13 00:18:58.801616 kubelet[3219]: E0813 00:18:58.801445 3219 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/6a3e3430-5653-4dcb-b04f-a1d4bf36178d-kube-proxy podName:6a3e3430-5653-4dcb-b04f-a1d4bf36178d nodeName:}" failed. No retries permitted until 2025-08-13 00:18:59.301373362 +0000 UTC m=+6.209355589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/6a3e3430-5653-4dcb-b04f-a1d4bf36178d-kube-proxy") pod "kube-proxy-77l7p" (UID: "6a3e3430-5653-4dcb-b04f-a1d4bf36178d") : failed to sync configmap cache: timed out waiting for the condition Aug 13 00:18:58.919541 kubelet[3219]: E0813 00:18:58.919383 3219 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Aug 13 00:18:58.919541 kubelet[3219]: E0813 00:18:58.919432 3219 projected.go:194] Error preparing data for projected volume kube-api-access-2spbx for pod kube-system/kube-proxy-77l7p: failed to sync configmap cache: timed out waiting for the condition Aug 13 00:18:58.919541 kubelet[3219]: E0813 00:18:58.919538 3219 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a3e3430-5653-4dcb-b04f-a1d4bf36178d-kube-api-access-2spbx podName:6a3e3430-5653-4dcb-b04f-a1d4bf36178d nodeName:}" failed. No retries permitted until 2025-08-13 00:18:59.419510663 +0000 UTC m=+6.327492890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2spbx" (UniqueName: "kubernetes.io/projected/6a3e3430-5653-4dcb-b04f-a1d4bf36178d-kube-api-access-2spbx") pod "kube-proxy-77l7p" (UID: "6a3e3430-5653-4dcb-b04f-a1d4bf36178d") : failed to sync configmap cache: timed out waiting for the condition Aug 13 00:18:58.998582 containerd[2021]: time="2025-08-13T00:18:58.997594627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sj22q,Uid:ad5613d6-bb3a-4b38-851e-500e0b9f338e,Namespace:kube-system,Attempt:0,}" Aug 13 00:18:59.052676 containerd[2021]: time="2025-08-13T00:18:59.051577323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:18:59.052676 containerd[2021]: time="2025-08-13T00:18:59.052422879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:18:59.052676 containerd[2021]: time="2025-08-13T00:18:59.052481187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:59.053197 containerd[2021]: time="2025-08-13T00:18:59.052651647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:59.095261 systemd[1]: Started cri-containerd-edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05.scope - libcontainer container edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05. 
Aug 13 00:18:59.167206 containerd[2021]: time="2025-08-13T00:18:59.167148868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sj22q,Uid:ad5613d6-bb3a-4b38-851e-500e0b9f338e,Namespace:kube-system,Attempt:0,} returns sandbox id \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\"" Aug 13 00:18:59.170348 containerd[2021]: time="2025-08-13T00:18:59.170277724Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:18:59.323202 containerd[2021]: time="2025-08-13T00:18:59.323015009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zx6wm,Uid:be58da57-708a-440a-9d4d-10badcc9f077,Namespace:kube-system,Attempt:0,}" Aug 13 00:18:59.371462 containerd[2021]: time="2025-08-13T00:18:59.371070761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:18:59.371462 containerd[2021]: time="2025-08-13T00:18:59.371183237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:18:59.371462 containerd[2021]: time="2025-08-13T00:18:59.371220401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:59.371728 containerd[2021]: time="2025-08-13T00:18:59.371555189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:59.413284 systemd[1]: run-containerd-runc-k8s.io-03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590-runc.oaQ3Wk.mount: Deactivated successfully. Aug 13 00:18:59.427235 systemd[1]: Started cri-containerd-03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590.scope - libcontainer container 03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590. Aug 13 00:18:59.470522 containerd[2021]: time="2025-08-13T00:18:59.470460917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zx6wm,Uid:be58da57-708a-440a-9d4d-10badcc9f077,Namespace:kube-system,Attempt:0,} returns sandbox id \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\"" Aug 13 00:18:59.753265 containerd[2021]: time="2025-08-13T00:18:59.753203911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77l7p,Uid:6a3e3430-5653-4dcb-b04f-a1d4bf36178d,Namespace:kube-system,Attempt:0,}" Aug 13 00:18:59.794603 containerd[2021]: time="2025-08-13T00:18:59.794204911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:18:59.794603 containerd[2021]: time="2025-08-13T00:18:59.794303119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:18:59.794603 containerd[2021]: time="2025-08-13T00:18:59.794331907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:59.794603 containerd[2021]: time="2025-08-13T00:18:59.794526019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:59.824305 systemd[1]: Started cri-containerd-573f70b91cfd66f01d65f89e7d95fe6f593a97bf52dcb6d9b4fc8f441b61f8ac.scope - libcontainer container 573f70b91cfd66f01d65f89e7d95fe6f593a97bf52dcb6d9b4fc8f441b61f8ac. Aug 13 00:18:59.874037 containerd[2021]: time="2025-08-13T00:18:59.873948415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77l7p,Uid:6a3e3430-5653-4dcb-b04f-a1d4bf36178d,Namespace:kube-system,Attempt:0,} returns sandbox id \"573f70b91cfd66f01d65f89e7d95fe6f593a97bf52dcb6d9b4fc8f441b61f8ac\"" Aug 13 00:18:59.886372 containerd[2021]: time="2025-08-13T00:18:59.886306856Z" level=info msg="CreateContainer within sandbox \"573f70b91cfd66f01d65f89e7d95fe6f593a97bf52dcb6d9b4fc8f441b61f8ac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:18:59.919961 containerd[2021]: time="2025-08-13T00:18:59.919845524Z" level=info msg="CreateContainer within sandbox \"573f70b91cfd66f01d65f89e7d95fe6f593a97bf52dcb6d9b4fc8f441b61f8ac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"618bd4d8667961a39babfc34397860c18e09fdf217ead4fc3172342cf5b686e9\"" Aug 13 00:18:59.922125 containerd[2021]: time="2025-08-13T00:18:59.921063680Z" level=info msg="StartContainer for \"618bd4d8667961a39babfc34397860c18e09fdf217ead4fc3172342cf5b686e9\"" Aug 13 00:18:59.971472 systemd[1]: Started cri-containerd-618bd4d8667961a39babfc34397860c18e09fdf217ead4fc3172342cf5b686e9.scope - libcontainer container 618bd4d8667961a39babfc34397860c18e09fdf217ead4fc3172342cf5b686e9. Aug 13 00:19:00.031419 containerd[2021]: time="2025-08-13T00:19:00.031066756Z" level=info msg="StartContainer for \"618bd4d8667961a39babfc34397860c18e09fdf217ead4fc3172342cf5b686e9\" returns successfully" Aug 13 00:19:00.729914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2905800332.mount: Deactivated successfully. 
Aug 13 00:19:01.415357 containerd[2021]: time="2025-08-13T00:19:01.415267267Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:01.417543 containerd[2021]: time="2025-08-13T00:19:01.417439075Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Aug 13 00:19:01.420862 containerd[2021]: time="2025-08-13T00:19:01.420233671Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:01.428966 containerd[2021]: time="2025-08-13T00:19:01.428870683Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.258514503s" Aug 13 00:19:01.429385 containerd[2021]: time="2025-08-13T00:19:01.429168895Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Aug 13 00:19:01.438228 containerd[2021]: time="2025-08-13T00:19:01.438133627Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:19:01.447017 containerd[2021]: time="2025-08-13T00:19:01.446927383Z" level=info msg="CreateContainer within sandbox \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:19:01.472584 containerd[2021]: time="2025-08-13T00:19:01.472527271Z" level=info msg="CreateContainer within sandbox \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\"" Aug 13 00:19:01.475370 containerd[2021]: time="2025-08-13T00:19:01.473386483Z" level=info msg="StartContainer for \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\"" Aug 13 00:19:01.539228 systemd[1]: Started cri-containerd-ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2.scope - libcontainer container ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2. 
Aug 13 00:19:01.596630 containerd[2021]: time="2025-08-13T00:19:01.595477088Z" level=info msg="StartContainer for \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\" returns successfully" Aug 13 00:19:02.593246 kubelet[3219]: I0813 00:19:02.592969 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-77l7p" podStartSLOduration=5.592941729 podStartE2EDuration="5.592941729s" podCreationTimestamp="2025-08-13 00:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:00.550170907 +0000 UTC m=+7.458153146" watchObservedRunningTime="2025-08-13 00:19:02.592941729 +0000 UTC m=+9.500924076" Aug 13 00:19:07.803269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945505807.mount: Deactivated successfully. Aug 13 00:19:10.503957 containerd[2021]: time="2025-08-13T00:19:10.503529196Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:10.505528 containerd[2021]: time="2025-08-13T00:19:10.505463956Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Aug 13 00:19:10.508373 containerd[2021]: time="2025-08-13T00:19:10.508243600Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:10.511919 containerd[2021]: time="2025-08-13T00:19:10.511603120Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.073377285s" Aug 13 00:19:10.511919 containerd[2021]: time="2025-08-13T00:19:10.511670680Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Aug 13 00:19:10.521466 containerd[2021]: time="2025-08-13T00:19:10.521391016Z" level=info msg="CreateContainer within sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:19:10.548055 containerd[2021]: time="2025-08-13T00:19:10.547957540Z" level=info msg="CreateContainer within sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a\"" Aug 13 00:19:10.549493 containerd[2021]: time="2025-08-13T00:19:10.549097408Z" level=info msg="StartContainer for \"f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a\"" Aug 13 00:19:10.609229 systemd[1]: Started cri-containerd-f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a.scope - libcontainer container f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a. 
Aug 13 00:19:10.657068 containerd[2021]: time="2025-08-13T00:19:10.656909297Z" level=info msg="StartContainer for \"f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a\" returns successfully" Aug 13 00:19:10.687308 systemd[1]: cri-containerd-f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a.scope: Deactivated successfully. Aug 13 00:19:11.435047 containerd[2021]: time="2025-08-13T00:19:11.433713749Z" level=info msg="shim disconnected" id=f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a namespace=k8s.io Aug 13 00:19:11.435047 containerd[2021]: time="2025-08-13T00:19:11.433790573Z" level=warning msg="cleaning up after shim disconnected" id=f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a namespace=k8s.io Aug 13 00:19:11.435047 containerd[2021]: time="2025-08-13T00:19:11.433812569Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:19:11.536775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a-rootfs.mount: Deactivated successfully. Aug 13 00:19:11.582155 containerd[2021]: time="2025-08-13T00:19:11.582070818Z" level=info msg="CreateContainer within sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:19:11.617424 kubelet[3219]: I0813 00:19:11.617038 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-sj22q" podStartSLOduration=11.354182175 podStartE2EDuration="13.61701651s" podCreationTimestamp="2025-08-13 00:18:58 +0000 UTC" firstStartedPulling="2025-08-13 00:18:59.169479256 +0000 UTC m=+6.077461483" lastFinishedPulling="2025-08-13 00:19:01.432313603 +0000 UTC m=+8.340295818" observedRunningTime="2025-08-13 00:19:02.594974385 +0000 UTC m=+9.502956624" watchObservedRunningTime="2025-08-13 00:19:11.61701651 +0000 UTC m=+18.524998725" Aug 13 00:19:11.626177 containerd[2021]: time="2025-08-13T00:19:11.626112942Z" level=info msg="CreateContainer within sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50\"" Aug 13 00:19:11.627670 containerd[2021]: time="2025-08-13T00:19:11.627346626Z" level=info msg="StartContainer for \"3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50\"" Aug 13 00:19:11.686215 systemd[1]: Started cri-containerd-3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50.scope - libcontainer container 3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50. Aug 13 00:19:11.733834 containerd[2021]: time="2025-08-13T00:19:11.733646490Z" level=info msg="StartContainer for \"3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50\" returns successfully" Aug 13 00:19:11.760373 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:19:11.761615 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:19:11.762105 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:19:11.772579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:19:11.773041 systemd[1]: cri-containerd-3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50.scope: Deactivated successfully. 
Aug 13 00:19:11.819844 containerd[2021]: time="2025-08-13T00:19:11.819680215Z" level=info msg="shim disconnected" id=3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50 namespace=k8s.io Aug 13 00:19:11.820471 containerd[2021]: time="2025-08-13T00:19:11.819922615Z" level=warning msg="cleaning up after shim disconnected" id=3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50 namespace=k8s.io Aug 13 00:19:11.820471 containerd[2021]: time="2025-08-13T00:19:11.820027987Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:19:11.820774 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:19:12.537237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50-rootfs.mount: Deactivated successfully. Aug 13 00:19:12.590782 containerd[2021]: time="2025-08-13T00:19:12.590610367Z" level=info msg="CreateContainer within sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:19:12.632718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780850895.mount: Deactivated successfully. Aug 13 00:19:12.645134 containerd[2021]: time="2025-08-13T00:19:12.644536339Z" level=info msg="CreateContainer within sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964\"" Aug 13 00:19:12.651134 containerd[2021]: time="2025-08-13T00:19:12.651055459Z" level=info msg="StartContainer for \"5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964\"" Aug 13 00:19:12.703238 systemd[1]: Started cri-containerd-5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964.scope - libcontainer container 5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964. Aug 13 00:19:12.759474 containerd[2021]: time="2025-08-13T00:19:12.759160675Z" level=info msg="StartContainer for \"5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964\" returns successfully" Aug 13 00:19:12.770167 systemd[1]: cri-containerd-5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964.scope: Deactivated successfully. Aug 13 00:19:12.817621 containerd[2021]: time="2025-08-13T00:19:12.817318460Z" level=info msg="shim disconnected" id=5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964 namespace=k8s.io Aug 13 00:19:12.817621 containerd[2021]: time="2025-08-13T00:19:12.817534520Z" level=warning msg="cleaning up after shim disconnected" id=5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964 namespace=k8s.io Aug 13 00:19:12.817621 containerd[2021]: time="2025-08-13T00:19:12.817562540Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:19:13.595864 containerd[2021]: time="2025-08-13T00:19:13.595702076Z" level=info msg="CreateContainer within sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:19:13.628621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632282299.mount: Deactivated successfully. 
Aug 13 00:19:13.634519 containerd[2021]: time="2025-08-13T00:19:13.634315460Z" level=info msg="CreateContainer within sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad\"" Aug 13 00:19:13.637978 containerd[2021]: time="2025-08-13T00:19:13.636239552Z" level=info msg="StartContainer for \"94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad\"" Aug 13 00:19:13.708213 systemd[1]: Started cri-containerd-94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad.scope - libcontainer container 94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad. Aug 13 00:19:13.756064 systemd[1]: cri-containerd-94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad.scope: Deactivated successfully. Aug 13 00:19:13.766270 containerd[2021]: time="2025-08-13T00:19:13.766037624Z" level=info msg="StartContainer for \"94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad\" returns successfully" Aug 13 00:19:13.821902 containerd[2021]: time="2025-08-13T00:19:13.821695569Z" level=info msg="shim disconnected" id=94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad namespace=k8s.io Aug 13 00:19:13.821902 containerd[2021]: time="2025-08-13T00:19:13.821820369Z" level=warning msg="cleaning up after shim disconnected" id=94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad namespace=k8s.io Aug 13 00:19:13.821902 containerd[2021]: time="2025-08-13T00:19:13.821858721Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:19:14.537385 systemd[1]: run-containerd-runc-k8s.io-94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad-runc.9c5JK9.mount: Deactivated successfully. Aug 13 00:19:14.537802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad-rootfs.mount: Deactivated successfully. Aug 13 00:19:14.602511 containerd[2021]: time="2025-08-13T00:19:14.602041569Z" level=info msg="CreateContainer within sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:19:14.643404 containerd[2021]: time="2025-08-13T00:19:14.641294769Z" level=info msg="CreateContainer within sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\"" Aug 13 00:19:14.643404 containerd[2021]: time="2025-08-13T00:19:14.643196277Z" level=info msg="StartContainer for \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\"" Aug 13 00:19:14.702204 systemd[1]: Started cri-containerd-861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5.scope - libcontainer container 861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5. 
Aug 13 00:19:14.756441 containerd[2021]: time="2025-08-13T00:19:14.756379113Z" level=info msg="StartContainer for \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\" returns successfully" Aug 13 00:19:14.894326 kubelet[3219]: I0813 00:19:14.893363 3219 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:19:14.981208 systemd[1]: Created slice kubepods-burstable-poda19be6c8_4816_43ad_99d5_cac6d83ac994.slice - libcontainer container kubepods-burstable-poda19be6c8_4816_43ad_99d5_cac6d83ac994.slice. Aug 13 00:19:14.998973 systemd[1]: Created slice kubepods-burstable-podfd3f27ef_82a9_4cd5_bf44_195299cb633a.slice - libcontainer container kubepods-burstable-podfd3f27ef_82a9_4cd5_bf44_195299cb633a.slice. Aug 13 00:19:15.031419 kubelet[3219]: I0813 00:19:15.031226 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd3f27ef-82a9-4cd5-bf44-195299cb633a-config-volume\") pod \"coredns-674b8bbfcf-bdsrz\" (UID: \"fd3f27ef-82a9-4cd5-bf44-195299cb633a\") " pod="kube-system/coredns-674b8bbfcf-bdsrz" Aug 13 00:19:15.031419 kubelet[3219]: I0813 00:19:15.031314 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a19be6c8-4816-43ad-99d5-cac6d83ac994-config-volume\") pod \"coredns-674b8bbfcf-rfrdx\" (UID: \"a19be6c8-4816-43ad-99d5-cac6d83ac994\") " pod="kube-system/coredns-674b8bbfcf-rfrdx" Aug 13 00:19:15.031419 kubelet[3219]: I0813 00:19:15.031362 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft77l\" (UniqueName: \"kubernetes.io/projected/a19be6c8-4816-43ad-99d5-cac6d83ac994-kube-api-access-ft77l\") pod \"coredns-674b8bbfcf-rfrdx\" (UID: \"a19be6c8-4816-43ad-99d5-cac6d83ac994\") " pod="kube-system/coredns-674b8bbfcf-rfrdx" Aug 13 00:19:15.031419 kubelet[3219]: I0813 00:19:15.031407 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdq4q\" (UniqueName: \"kubernetes.io/projected/fd3f27ef-82a9-4cd5-bf44-195299cb633a-kube-api-access-cdq4q\") pod \"coredns-674b8bbfcf-bdsrz\" (UID: \"fd3f27ef-82a9-4cd5-bf44-195299cb633a\") " pod="kube-system/coredns-674b8bbfcf-bdsrz" Aug 13 00:19:15.294331 containerd[2021]: time="2025-08-13T00:19:15.293256812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rfrdx,Uid:a19be6c8-4816-43ad-99d5-cac6d83ac994,Namespace:kube-system,Attempt:0,}" Aug 13 00:19:15.310130 containerd[2021]: time="2025-08-13T00:19:15.310068980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bdsrz,Uid:fd3f27ef-82a9-4cd5-bf44-195299cb633a,Namespace:kube-system,Attempt:0,}" Aug 13 00:19:15.649856 kubelet[3219]: I0813 00:19:15.649177 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zx6wm" podStartSLOduration=6.608569151 podStartE2EDuration="17.649155298s" podCreationTimestamp="2025-08-13 00:18:58 +0000 UTC" firstStartedPulling="2025-08-13 00:18:59.472911449 +0000 UTC m=+6.380893676" lastFinishedPulling="2025-08-13 00:19:10.513497596 +0000 UTC m=+17.421479823" observedRunningTime="2025-08-13 00:19:15.648752902 +0000 UTC m=+22.556735129" watchObservedRunningTime="2025-08-13 00:19:15.649155298 +0000 UTC m=+22.557137513" Aug 13 00:19:17.745580 (udev-worker)[4176]: Network interface NamePolicy= disabled on kernel 
command line. Aug 13 00:19:17.746725 (udev-worker)[4235]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:19:17.748635 systemd-networkd[1929]: cilium_host: Link UP Aug 13 00:19:17.750987 systemd-networkd[1929]: cilium_net: Link UP Aug 13 00:19:17.750995 systemd-networkd[1929]: cilium_net: Gained carrier Aug 13 00:19:17.753474 systemd-networkd[1929]: cilium_host: Gained carrier Aug 13 00:19:17.916283 (udev-worker)[4249]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:19:17.928140 systemd-networkd[1929]: cilium_vxlan: Link UP Aug 13 00:19:17.928596 systemd-networkd[1929]: cilium_vxlan: Gained carrier Aug 13 00:19:17.946447 systemd-networkd[1929]: cilium_net: Gained IPv6LL Aug 13 00:19:18.330206 systemd-networkd[1929]: cilium_host: Gained IPv6LL Aug 13 00:19:18.451984 kernel: NET: Registered PF_ALG protocol family Aug 13 00:19:19.548032 systemd-networkd[1929]: cilium_vxlan: Gained IPv6LL Aug 13 00:19:19.798341 systemd-networkd[1929]: lxc_health: Link UP Aug 13 00:19:19.812342 systemd-networkd[1929]: lxc_health: Gained carrier Aug 13 00:19:20.409166 systemd-networkd[1929]: lxc090e04714841: Link UP Aug 13 00:19:20.417279 systemd-networkd[1929]: lxc286db7fcee6c: Link UP Aug 13 00:19:20.424037 kernel: eth0: renamed from tmp4f6e2 Aug 13 00:19:20.433033 kernel: eth0: renamed from tmpcb481 Aug 13 00:19:20.442677 systemd-networkd[1929]: lxc090e04714841: Gained carrier Aug 13 00:19:20.447009 (udev-worker)[4586]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:19:20.447720 systemd-networkd[1929]: lxc286db7fcee6c: Gained carrier Aug 13 00:19:21.108188 systemd[1]: run-containerd-runc-k8s.io-861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5-runc.NV1w6z.mount: Deactivated successfully. Aug 13 00:19:21.594228 systemd-networkd[1929]: lxc_health: Gained IPv6LL Aug 13 00:19:21.595859 systemd-networkd[1929]: lxc286db7fcee6c: Gained IPv6LL Aug 13 00:19:21.914165 systemd-networkd[1929]: lxc090e04714841: Gained IPv6LL Aug 13 00:19:23.429599 systemd[1]: run-containerd-runc-k8s.io-861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5-runc.csJMcE.mount: Deactivated successfully. 
Aug 13 00:19:24.810327 ntpd[1987]: Listen normally on 8 cilium_host 192.168.0.176:123 Aug 13 00:19:24.810463 ntpd[1987]: Listen normally on 9 cilium_net [fe80::5810:6cff:feed:bcb9%4]:123 Aug 13 00:19:24.810953 ntpd[1987]: 13 Aug 00:19:24 ntpd[1987]: Listen normally on 8 cilium_host 192.168.0.176:123 Aug 13 00:19:24.810953 ntpd[1987]: 13 Aug 00:19:24 ntpd[1987]: Listen normally on 9 cilium_net [fe80::5810:6cff:feed:bcb9%4]:123 Aug 13 00:19:24.810953 ntpd[1987]: 13 Aug 00:19:24 ntpd[1987]: Listen normally on 10 cilium_host [fe80::98f0:91ff:fe28:eb18%5]:123 Aug 13 00:19:24.810953 ntpd[1987]: 13 Aug 00:19:24 ntpd[1987]: Listen normally on 11 cilium_vxlan [fe80::864:2ff:fe0a:39f5%6]:123 Aug 13 00:19:24.810953 ntpd[1987]: 13 Aug 00:19:24 ntpd[1987]: Listen normally on 12 lxc_health [fe80::640e:12ff:fe36:fc45%8]:123 Aug 13 00:19:24.810953 ntpd[1987]: 13 Aug 00:19:24 ntpd[1987]: Listen normally on 13 lxc090e04714841 [fe80::d4ec:22ff:fe52:4f9d%10]:123 Aug 13 00:19:24.810953 ntpd[1987]: 13 Aug 00:19:24 ntpd[1987]: Listen normally on 14 lxc286db7fcee6c [fe80::1867:4fff:fe8b:74d1%12]:123 Aug 13 00:19:24.810547 ntpd[1987]: Listen normally on 10 cilium_host [fe80::98f0:91ff:fe28:eb18%5]:123 Aug 13 00:19:24.810617 ntpd[1987]: Listen normally on 11 cilium_vxlan [fe80::864:2ff:fe0a:39f5%6]:123 Aug 13 00:19:24.810686 ntpd[1987]: Listen normally on 12 lxc_health [fe80::640e:12ff:fe36:fc45%8]:123 Aug 13 00:19:24.810759 ntpd[1987]: Listen normally on 13 lxc090e04714841 [fe80::d4ec:22ff:fe52:4f9d%10]:123 Aug 13 00:19:24.810826 ntpd[1987]: Listen normally on 14 lxc286db7fcee6c [fe80::1867:4fff:fe8b:74d1%12]:123 Aug 13 00:19:27.073587 sudo[2334]: pam_unix(sudo:session): session closed for user root Aug 13 00:19:27.098956 sshd[2331]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:27.109313 systemd[1]: sshd@6-172.31.19.145:22-139.178.89.65:47376.service: Deactivated successfully. Aug 13 00:19:27.115360 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:19:27.116190 systemd[1]: session-7.scope: Consumed 12.701s CPU time, 155.7M memory peak, 0B memory swap peak. Aug 13 00:19:27.122159 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:19:27.124918 systemd-logind[1993]: Removed session 7. Aug 13 00:19:29.527281 containerd[2021]: time="2025-08-13T00:19:29.527068811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:29.531084 containerd[2021]: time="2025-08-13T00:19:29.528125183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:29.531084 containerd[2021]: time="2025-08-13T00:19:29.528302147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:29.531084 containerd[2021]: time="2025-08-13T00:19:29.528671291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:29.593181 containerd[2021]: time="2025-08-13T00:19:29.593024819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:29.593333 containerd[2021]: time="2025-08-13T00:19:29.593245475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:29.593397 containerd[2021]: time="2025-08-13T00:19:29.593335847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:29.595694 containerd[2021]: time="2025-08-13T00:19:29.594211883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:29.623255 systemd[1]: Started cri-containerd-4f6e22f0c69014e92a371af2474d87998a0ebdb63701389f76b8a66c037d5173.scope - libcontainer container 4f6e22f0c69014e92a371af2474d87998a0ebdb63701389f76b8a66c037d5173. Aug 13 00:19:29.663103 systemd[1]: Started cri-containerd-cb4811b3901685c1cc8b25f7f272621783783c3e6808289092ae0f9bcf2a120b.scope - libcontainer container cb4811b3901685c1cc8b25f7f272621783783c3e6808289092ae0f9bcf2a120b. Aug 13 00:19:29.793855 containerd[2021]: time="2025-08-13T00:19:29.793598112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rfrdx,Uid:a19be6c8-4816-43ad-99d5-cac6d83ac994,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f6e22f0c69014e92a371af2474d87998a0ebdb63701389f76b8a66c037d5173\"" Aug 13 00:19:29.815168 containerd[2021]: time="2025-08-13T00:19:29.815082096Z" level=info msg="CreateContainer within sandbox \"4f6e22f0c69014e92a371af2474d87998a0ebdb63701389f76b8a66c037d5173\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:19:29.823144 containerd[2021]: time="2025-08-13T00:19:29.822914508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bdsrz,Uid:fd3f27ef-82a9-4cd5-bf44-195299cb633a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb4811b3901685c1cc8b25f7f272621783783c3e6808289092ae0f9bcf2a120b\"" Aug 13 00:19:29.840704 containerd[2021]: time="2025-08-13T00:19:29.840616608Z" level=info msg="CreateContainer within sandbox \"cb4811b3901685c1cc8b25f7f272621783783c3e6808289092ae0f9bcf2a120b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:19:29.859539 containerd[2021]: time="2025-08-13T00:19:29.859452456Z" level=info msg="CreateContainer within sandbox \"4f6e22f0c69014e92a371af2474d87998a0ebdb63701389f76b8a66c037d5173\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dda0591f5ed200897e3bba514fb00feac7b151c80ec274f6b2d6d6e704af4f5f\"" Aug 13 00:19:29.863933 containerd[2021]: time="2025-08-13T00:19:29.861125664Z" level=info msg="StartContainer for \"dda0591f5ed200897e3bba514fb00feac7b151c80ec274f6b2d6d6e704af4f5f\"" Aug 13 00:19:29.882084 containerd[2021]: time="2025-08-13T00:19:29.882018817Z" level=info msg="CreateContainer within sandbox \"cb4811b3901685c1cc8b25f7f272621783783c3e6808289092ae0f9bcf2a120b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78f6e0948cb304c52085c2f62d21d31dba9a22f2c5b73e441ac992a73ac84b4d\"" Aug 13 00:19:29.886397 containerd[2021]: time="2025-08-13T00:19:29.886320709Z" level=info msg="StartContainer for \"78f6e0948cb304c52085c2f62d21d31dba9a22f2c5b73e441ac992a73ac84b4d\"" Aug 13 00:19:29.937526 systemd[1]: Started cri-containerd-dda0591f5ed200897e3bba514fb00feac7b151c80ec274f6b2d6d6e704af4f5f.scope - libcontainer container dda0591f5ed200897e3bba514fb00feac7b151c80ec274f6b2d6d6e704af4f5f. Aug 13 00:19:29.976200 systemd[1]: Started cri-containerd-78f6e0948cb304c52085c2f62d21d31dba9a22f2c5b73e441ac992a73ac84b4d.scope - libcontainer container 78f6e0948cb304c52085c2f62d21d31dba9a22f2c5b73e441ac992a73ac84b4d. 
Aug 13 00:19:30.053578 containerd[2021]: time="2025-08-13T00:19:30.053418873Z" level=info msg="StartContainer for \"dda0591f5ed200897e3bba514fb00feac7b151c80ec274f6b2d6d6e704af4f5f\" returns successfully" Aug 13 00:19:30.091177 containerd[2021]: time="2025-08-13T00:19:30.091082614Z" level=info msg="StartContainer for \"78f6e0948cb304c52085c2f62d21d31dba9a22f2c5b73e441ac992a73ac84b4d\" returns successfully" Aug 13 00:19:30.727922 kubelet[3219]: I0813 00:19:30.725581 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bdsrz" podStartSLOduration=33.725556469 podStartE2EDuration="33.725556469s" podCreationTimestamp="2025-08-13 00:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:30.702543661 +0000 UTC m=+37.610525888" watchObservedRunningTime="2025-08-13 00:19:30.725556469 +0000 UTC m=+37.633538696" Aug 13 00:20:03.549439 systemd[1]: Started sshd@7-172.31.19.145:22-139.178.89.65:41672.service - OpenSSH per-connection server daemon (139.178.89.65:41672). Aug 13 00:20:03.730935 sshd[4911]: Accepted publickey for core from 139.178.89.65 port 41672 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:03.733652 sshd[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:03.743160 systemd-logind[1993]: New session 8 of user core. Aug 13 00:20:03.758187 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:20:04.047413 sshd[4911]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:04.053044 systemd[1]: sshd@7-172.31.19.145:22-139.178.89.65:41672.service: Deactivated successfully. Aug 13 00:20:04.053623 systemd-logind[1993]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:20:04.057931 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:20:04.063974 systemd-logind[1993]: Removed session 8. Aug 13 00:20:09.087478 systemd[1]: Started sshd@8-172.31.19.145:22-139.178.89.65:38586.service - OpenSSH per-connection server daemon (139.178.89.65:38586). Aug 13 00:20:09.255264 sshd[4926]: Accepted publickey for core from 139.178.89.65 port 38586 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:09.258039 sshd[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:09.267047 systemd-logind[1993]: New session 9 of user core. Aug 13 00:20:09.275176 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:20:09.520275 sshd[4926]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:09.526411 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:20:09.529784 systemd[1]: sshd@8-172.31.19.145:22-139.178.89.65:38586.service: Deactivated successfully. Aug 13 00:20:09.534232 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:20:09.537605 systemd-logind[1993]: Removed session 9. Aug 13 00:20:14.568424 systemd[1]: Started sshd@9-172.31.19.145:22-139.178.89.65:38590.service - OpenSSH per-connection server daemon (139.178.89.65:38590). Aug 13 00:20:14.734820 sshd[4941]: Accepted publickey for core from 139.178.89.65 port 38590 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:14.737579 sshd[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:14.747851 systemd-logind[1993]: New session 10 of user core. 
Aug 13 00:20:14.752265 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:20:14.992390 sshd[4941]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:14.998831 systemd[1]: sshd@9-172.31.19.145:22-139.178.89.65:38590.service: Deactivated successfully. Aug 13 00:20:15.003351 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:20:15.005630 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:20:15.008851 systemd-logind[1993]: Removed session 10. Aug 13 00:20:20.032420 systemd[1]: Started sshd@10-172.31.19.145:22-139.178.89.65:47258.service - OpenSSH per-connection server daemon (139.178.89.65:47258). Aug 13 00:20:20.209167 sshd[4956]: Accepted publickey for core from 139.178.89.65 port 47258 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:20.211819 sshd[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:20.219331 systemd-logind[1993]: New session 11 of user core. Aug 13 00:20:20.229145 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:20:20.474300 sshd[4956]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:20.481136 systemd[1]: sshd@10-172.31.19.145:22-139.178.89.65:47258.service: Deactivated successfully. Aug 13 00:20:20.485286 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:20:20.489420 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:20:20.491512 systemd-logind[1993]: Removed session 11. Aug 13 00:20:20.519434 systemd[1]: Started sshd@11-172.31.19.145:22-139.178.89.65:47264.service - OpenSSH per-connection server daemon (139.178.89.65:47264). Aug 13 00:20:20.689046 sshd[4970]: Accepted publickey for core from 139.178.89.65 port 47264 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:20.692862 sshd[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:20.701026 systemd-logind[1993]: New session 12 of user core. Aug 13 00:20:20.709161 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:20:21.021696 sshd[4970]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:21.031364 systemd[1]: sshd@11-172.31.19.145:22-139.178.89.65:47264.service: Deactivated successfully. Aug 13 00:20:21.043052 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:20:21.048695 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:20:21.074427 systemd[1]: Started sshd@12-172.31.19.145:22-139.178.89.65:47280.service - OpenSSH per-connection server daemon (139.178.89.65:47280). Aug 13 00:20:21.077938 systemd-logind[1993]: Removed session 12. Aug 13 00:20:21.259460 sshd[4981]: Accepted publickey for core from 139.178.89.65 port 47280 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:21.262125 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:21.269996 systemd-logind[1993]: New session 13 of user core. Aug 13 00:20:21.280258 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:20:21.534842 sshd[4981]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:21.542503 systemd[1]: sshd@12-172.31.19.145:22-139.178.89.65:47280.service: Deactivated successfully. Aug 13 00:20:21.547822 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:20:21.550556 systemd-logind[1993]: Session 13 logged out. 
Waiting for processes to exit. Aug 13 00:20:21.552354 systemd-logind[1993]: Removed session 13. Aug 13 00:20:26.574677 systemd[1]: Started sshd@13-172.31.19.145:22-139.178.89.65:47284.service - OpenSSH per-connection server daemon (139.178.89.65:47284). Aug 13 00:20:26.758436 sshd[4994]: Accepted publickey for core from 139.178.89.65 port 47284 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:26.761368 sshd[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:26.769152 systemd-logind[1993]: New session 14 of user core. Aug 13 00:20:26.783209 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:20:27.022602 sshd[4994]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:27.028948 systemd[1]: sshd@13-172.31.19.145:22-139.178.89.65:47284.service: Deactivated successfully. Aug 13 00:20:27.032837 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:20:27.036219 systemd-logind[1993]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:20:27.038377 systemd-logind[1993]: Removed session 14. Aug 13 00:20:32.062428 systemd[1]: Started sshd@14-172.31.19.145:22-139.178.89.65:57376.service - OpenSSH per-connection server daemon (139.178.89.65:57376). Aug 13 00:20:32.244014 sshd[5009]: Accepted publickey for core from 139.178.89.65 port 57376 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:32.248303 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:32.256987 systemd-logind[1993]: New session 15 of user core. Aug 13 00:20:32.263148 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:20:32.508938 sshd[5009]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:32.515043 systemd[1]: sshd@14-172.31.19.145:22-139.178.89.65:57376.service: Deactivated successfully. Aug 13 00:20:32.521128 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:20:32.522739 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:20:32.524726 systemd-logind[1993]: Removed session 15. Aug 13 00:20:37.549406 systemd[1]: Started sshd@15-172.31.19.145:22-139.178.89.65:57388.service - OpenSSH per-connection server daemon (139.178.89.65:57388). Aug 13 00:20:37.721639 sshd[5022]: Accepted publickey for core from 139.178.89.65 port 57388 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:37.724287 sshd[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:37.731872 systemd-logind[1993]: New session 16 of user core. Aug 13 00:20:37.741166 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:20:37.988873 sshd[5022]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:37.995221 systemd[1]: sshd@15-172.31.19.145:22-139.178.89.65:57388.service: Deactivated successfully. Aug 13 00:20:37.998519 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:20:38.000195 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:20:38.003461 systemd-logind[1993]: Removed session 16. Aug 13 00:20:43.032427 systemd[1]: Started sshd@16-172.31.19.145:22-139.178.89.65:56130.service - OpenSSH per-connection server daemon (139.178.89.65:56130). 
Aug 13 00:20:43.210207 sshd[5035]: Accepted publickey for core from 139.178.89.65 port 56130 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:43.212961 sshd[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:43.221448 systemd-logind[1993]: New session 17 of user core. Aug 13 00:20:43.228161 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:20:43.473347 sshd[5035]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:43.479195 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:20:43.480484 systemd[1]: sshd@16-172.31.19.145:22-139.178.89.65:56130.service: Deactivated successfully. Aug 13 00:20:43.485601 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:20:43.490829 systemd-logind[1993]: Removed session 17. Aug 13 00:20:43.513530 systemd[1]: Started sshd@17-172.31.19.145:22-139.178.89.65:56132.service - OpenSSH per-connection server daemon (139.178.89.65:56132). Aug 13 00:20:43.693550 sshd[5048]: Accepted publickey for core from 139.178.89.65 port 56132 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:43.696445 sshd[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:43.704220 systemd-logind[1993]: New session 18 of user core. Aug 13 00:20:43.713178 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:20:44.063735 sshd[5048]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:44.074822 systemd[1]: sshd@17-172.31.19.145:22-139.178.89.65:56132.service: Deactivated successfully. Aug 13 00:20:44.079156 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:20:44.081751 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:20:44.084322 systemd-logind[1993]: Removed session 18. Aug 13 00:20:44.100420 systemd[1]: Started sshd@18-172.31.19.145:22-139.178.89.65:56140.service - OpenSSH per-connection server daemon (139.178.89.65:56140). Aug 13 00:20:44.282547 sshd[5058]: Accepted publickey for core from 139.178.89.65 port 56140 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:44.285910 sshd[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:44.297576 systemd-logind[1993]: New session 19 of user core. Aug 13 00:20:44.303194 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:20:45.399303 sshd[5058]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:45.409813 systemd[1]: sshd@18-172.31.19.145:22-139.178.89.65:56140.service: Deactivated successfully. Aug 13 00:20:45.418116 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:20:45.427608 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:20:45.453611 systemd[1]: Started sshd@19-172.31.19.145:22-139.178.89.65:56146.service - OpenSSH per-connection server daemon (139.178.89.65:56146). Aug 13 00:20:45.457751 systemd-logind[1993]: Removed session 19. Aug 13 00:20:45.642989 sshd[5075]: Accepted publickey for core from 139.178.89.65 port 56146 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:45.645771 sshd[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:45.655410 systemd-logind[1993]: New session 20 of user core. Aug 13 00:20:45.660180 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 13 00:20:46.174868 sshd[5075]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:46.185502 systemd[1]: sshd@19-172.31.19.145:22-139.178.89.65:56146.service: Deactivated successfully. Aug 13 00:20:46.189190 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:20:46.191637 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:20:46.194287 systemd-logind[1993]: Removed session 20. Aug 13 00:20:46.214486 systemd[1]: Started sshd@20-172.31.19.145:22-139.178.89.65:56152.service - OpenSSH per-connection server daemon (139.178.89.65:56152). Aug 13 00:20:46.396687 sshd[5087]: Accepted publickey for core from 139.178.89.65 port 56152 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:46.400424 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:46.409354 systemd-logind[1993]: New session 21 of user core. Aug 13 00:20:46.417187 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:20:46.651149 sshd[5087]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:46.657707 systemd[1]: sshd@20-172.31.19.145:22-139.178.89.65:56152.service: Deactivated successfully. Aug 13 00:20:46.661140 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:20:46.663558 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:20:46.666845 systemd-logind[1993]: Removed session 21. Aug 13 00:20:51.697428 systemd[1]: Started sshd@21-172.31.19.145:22-139.178.89.65:39570.service - OpenSSH per-connection server daemon (139.178.89.65:39570). Aug 13 00:20:51.868101 sshd[5100]: Accepted publickey for core from 139.178.89.65 port 39570 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:51.871014 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:51.880039 systemd-logind[1993]: New session 22 of user core. Aug 13 00:20:51.890368 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:20:52.127252 sshd[5100]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:52.133219 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:20:52.134656 systemd[1]: sshd@21-172.31.19.145:22-139.178.89.65:39570.service: Deactivated successfully. Aug 13 00:20:52.139607 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:20:52.143098 systemd-logind[1993]: Removed session 22. Aug 13 00:20:57.169480 systemd[1]: Started sshd@22-172.31.19.145:22-139.178.89.65:39584.service - OpenSSH per-connection server daemon (139.178.89.65:39584). Aug 13 00:20:57.351849 sshd[5117]: Accepted publickey for core from 139.178.89.65 port 39584 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:57.355314 sshd[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:57.365210 systemd-logind[1993]: New session 23 of user core. Aug 13 00:20:57.374214 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 00:20:57.618130 sshd[5117]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:57.624324 systemd[1]: sshd@22-172.31.19.145:22-139.178.89.65:39584.service: Deactivated successfully. Aug 13 00:20:57.629219 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:20:57.630720 systemd-logind[1993]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:20:57.633922 systemd-logind[1993]: Removed session 23. 
Aug 13 00:21:02.662456 systemd[1]: Started sshd@23-172.31.19.145:22-139.178.89.65:55648.service - OpenSSH per-connection server daemon (139.178.89.65:55648). Aug 13 00:21:02.826400 sshd[5133]: Accepted publickey for core from 139.178.89.65 port 55648 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:02.829157 sshd[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:02.837217 systemd-logind[1993]: New session 24 of user core. Aug 13 00:21:02.846177 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 00:21:03.084277 sshd[5133]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:03.089550 systemd-logind[1993]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:21:03.090850 systemd[1]: sshd@23-172.31.19.145:22-139.178.89.65:55648.service: Deactivated successfully. Aug 13 00:21:03.094958 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:21:03.099684 systemd-logind[1993]: Removed session 24. Aug 13 00:21:03.122431 systemd[1]: Started sshd@24-172.31.19.145:22-139.178.89.65:55652.service - OpenSSH per-connection server daemon (139.178.89.65:55652). Aug 13 00:21:03.298454 sshd[5146]: Accepted publickey for core from 139.178.89.65 port 55652 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:03.301198 sshd[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:03.310026 systemd-logind[1993]: New session 25 of user core. Aug 13 00:21:03.317240 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 00:21:05.495821 kubelet[3219]: I0813 00:21:05.495300 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rfrdx" podStartSLOduration=128.495277147 podStartE2EDuration="2m8.495277147s" podCreationTimestamp="2025-08-13 00:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:30.756969469 +0000 UTC m=+37.664951732" watchObservedRunningTime="2025-08-13 00:21:05.495277147 +0000 UTC m=+132.403259386" Aug 13 00:21:05.551455 systemd[1]: run-containerd-runc-k8s.io-861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5-runc.blf475.mount: Deactivated successfully. Aug 13 00:21:05.559830 containerd[2021]: time="2025-08-13T00:21:05.559776812Z" level=info msg="StopContainer for \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\" with timeout 30 (s)" Aug 13 00:21:05.562553 containerd[2021]: time="2025-08-13T00:21:05.562108136Z" level=info msg="Stop container \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\" with signal terminated" Aug 13 00:21:05.583436 containerd[2021]: time="2025-08-13T00:21:05.583310684Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:21:05.589707 systemd[1]: cri-containerd-ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2.scope: Deactivated successfully. 
Aug 13 00:21:05.607443 containerd[2021]: time="2025-08-13T00:21:05.607372172Z" level=info msg="StopContainer for \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\" with timeout 2 (s)" Aug 13 00:21:05.608759 containerd[2021]: time="2025-08-13T00:21:05.608554172Z" level=info msg="Stop container \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\" with signal terminated" Aug 13 00:21:05.624212 systemd-networkd[1929]: lxc_health: Link DOWN Aug 13 00:21:05.624234 systemd-networkd[1929]: lxc_health: Lost carrier Aug 13 00:21:05.660820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2-rootfs.mount: Deactivated successfully. Aug 13 00:21:05.668905 systemd[1]: cri-containerd-861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5.scope: Deactivated successfully. Aug 13 00:21:05.669429 systemd[1]: cri-containerd-861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5.scope: Consumed 15.291s CPU time. Aug 13 00:21:05.680398 containerd[2021]: time="2025-08-13T00:21:05.680074676Z" level=info msg="shim disconnected" id=ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2 namespace=k8s.io Aug 13 00:21:05.680398 containerd[2021]: time="2025-08-13T00:21:05.680153684Z" level=warning msg="cleaning up after shim disconnected" id=ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2 namespace=k8s.io Aug 13 00:21:05.680398 containerd[2021]: time="2025-08-13T00:21:05.680180444Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:05.716629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5-rootfs.mount: Deactivated successfully. Aug 13 00:21:05.721208 containerd[2021]: time="2025-08-13T00:21:05.721099569Z" level=info msg="shim disconnected" id=861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5 namespace=k8s.io Aug 13 00:21:05.721208 containerd[2021]: time="2025-08-13T00:21:05.721201749Z" level=warning msg="cleaning up after shim disconnected" id=861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5 namespace=k8s.io Aug 13 00:21:05.721602 containerd[2021]: time="2025-08-13T00:21:05.721224909Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:05.727863 containerd[2021]: time="2025-08-13T00:21:05.727798581Z" level=info msg="StopContainer for \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\" returns successfully" Aug 13 00:21:05.729030 containerd[2021]: time="2025-08-13T00:21:05.728971149Z" level=info msg="StopPodSandbox for \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\"" Aug 13 00:21:05.731028 containerd[2021]: time="2025-08-13T00:21:05.729048225Z" level=info msg="Container to stop \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:21:05.738142 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05-shm.mount: Deactivated successfully. Aug 13 00:21:05.751693 systemd[1]: cri-containerd-edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05.scope: Deactivated successfully. 
Aug 13 00:21:05.757569 containerd[2021]: time="2025-08-13T00:21:05.757422981Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:21:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 00:21:05.764506 containerd[2021]: time="2025-08-13T00:21:05.764355009Z" level=info msg="StopContainer for \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\" returns successfully" Aug 13 00:21:05.765809 containerd[2021]: time="2025-08-13T00:21:05.765533361Z" level=info msg="StopPodSandbox for \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\"" Aug 13 00:21:05.765809 containerd[2021]: time="2025-08-13T00:21:05.765595593Z" level=info msg="Container to stop \"5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:21:05.765809 containerd[2021]: time="2025-08-13T00:21:05.765621801Z" level=info msg="Container to stop \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:21:05.765809 containerd[2021]: time="2025-08-13T00:21:05.765645165Z" level=info msg="Container to stop \"f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:21:05.765809 containerd[2021]: time="2025-08-13T00:21:05.765667017Z" level=info msg="Container to stop \"3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:21:05.765809 containerd[2021]: time="2025-08-13T00:21:05.765688437Z" level=info msg="Container to stop \"94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:21:05.788355 systemd[1]: cri-containerd-03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590.scope: Deactivated successfully. 
Aug 13 00:21:05.819001 containerd[2021]: time="2025-08-13T00:21:05.818836725Z" level=info msg="shim disconnected" id=edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05 namespace=k8s.io Aug 13 00:21:05.819001 containerd[2021]: time="2025-08-13T00:21:05.818972193Z" level=warning msg="cleaning up after shim disconnected" id=edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05 namespace=k8s.io Aug 13 00:21:05.819001 containerd[2021]: time="2025-08-13T00:21:05.818994861Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:05.844012 containerd[2021]: time="2025-08-13T00:21:05.843768045Z" level=info msg="shim disconnected" id=03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590 namespace=k8s.io Aug 13 00:21:05.844012 containerd[2021]: time="2025-08-13T00:21:05.843873969Z" level=warning msg="cleaning up after shim disconnected" id=03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590 namespace=k8s.io Aug 13 00:21:05.844012 containerd[2021]: time="2025-08-13T00:21:05.843948345Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:05.854359 containerd[2021]: time="2025-08-13T00:21:05.853783497Z" level=info msg="TearDown network for sandbox \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\" successfully" Aug 13 00:21:05.854359 containerd[2021]: time="2025-08-13T00:21:05.853845945Z" level=info msg="StopPodSandbox for \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\" returns successfully" Aug 13 00:21:05.883572 containerd[2021]: time="2025-08-13T00:21:05.883497501Z" level=info msg="TearDown network for sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" successfully" Aug 13 00:21:05.883572 containerd[2021]: time="2025-08-13T00:21:05.883571457Z" level=info msg="StopPodSandbox for \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" returns successfully" Aug 13 00:21:05.934943 kubelet[3219]: I0813 00:21:05.934735 3219 scope.go:117] "RemoveContainer" containerID="ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2" Aug 13 00:21:05.941689 containerd[2021]: time="2025-08-13T00:21:05.941237362Z" level=info msg="RemoveContainer for \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\"" Aug 13 00:21:05.952521 containerd[2021]: time="2025-08-13T00:21:05.952449754Z" level=info msg="RemoveContainer for \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\" returns successfully" Aug 13 00:21:05.952949 kubelet[3219]: I0813 00:21:05.952857 3219 scope.go:117] "RemoveContainer" containerID="ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2" Aug 13 00:21:05.953459 containerd[2021]: time="2025-08-13T00:21:05.953311762Z" level=error msg="ContainerStatus for \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\": not found" Aug 13 00:21:05.954015 kubelet[3219]: E0813 00:21:05.953805 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\": not found" containerID="ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2" Aug 13 00:21:05.954249 kubelet[3219]: I0813 00:21:05.953945 3219 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2"} err="failed to get container status \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca902c0aaeb2aebc3bb7dda1e8a7b8c3133e68713fdb64dd156116a07c7b3ce2\": not found" Aug 13 00:21:05.954249 kubelet[3219]: I0813 00:21:05.954138 3219 scope.go:117] "RemoveContainer" containerID="861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5" Aug 13 00:21:05.956695 containerd[2021]: time="2025-08-13T00:21:05.956641222Z" level=info msg="RemoveContainer for \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\"" Aug 13 00:21:05.958958 kubelet[3219]: I0813 00:21:05.958872 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad5613d6-bb3a-4b38-851e-500e0b9f338e-cilium-config-path\") pod \"ad5613d6-bb3a-4b38-851e-500e0b9f338e\" (UID: \"ad5613d6-bb3a-4b38-851e-500e0b9f338e\") " Aug 13 00:21:05.958958 kubelet[3219]: I0813 00:21:05.958978 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmrsd\" (UniqueName: \"kubernetes.io/projected/ad5613d6-bb3a-4b38-851e-500e0b9f338e-kube-api-access-tmrsd\") pod \"ad5613d6-bb3a-4b38-851e-500e0b9f338e\" (UID: \"ad5613d6-bb3a-4b38-851e-500e0b9f338e\") " Aug 13 00:21:05.965635 containerd[2021]: time="2025-08-13T00:21:05.965381302Z" level=info msg="RemoveContainer for \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\" returns successfully" Aug 13 00:21:05.966974 kubelet[3219]: I0813 00:21:05.966631 3219 scope.go:117] "RemoveContainer" containerID="94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad" Aug 13 00:21:05.966974 kubelet[3219]: I0813 00:21:05.966841 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad5613d6-bb3a-4b38-851e-500e0b9f338e-kube-api-access-tmrsd" (OuterVolumeSpecName: "kube-api-access-tmrsd") pod "ad5613d6-bb3a-4b38-851e-500e0b9f338e" (UID: "ad5613d6-bb3a-4b38-851e-500e0b9f338e"). InnerVolumeSpecName "kube-api-access-tmrsd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:21:05.969465 containerd[2021]: time="2025-08-13T00:21:05.969340030Z" level=info msg="RemoveContainer for \"94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad\"" Aug 13 00:21:05.971036 kubelet[3219]: I0813 00:21:05.970975 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad5613d6-bb3a-4b38-851e-500e0b9f338e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ad5613d6-bb3a-4b38-851e-500e0b9f338e" (UID: "ad5613d6-bb3a-4b38-851e-500e0b9f338e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:21:05.976496 containerd[2021]: time="2025-08-13T00:21:05.976360846Z" level=info msg="RemoveContainer for \"94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad\" returns successfully" Aug 13 00:21:05.976745 kubelet[3219]: I0813 00:21:05.976690 3219 scope.go:117] "RemoveContainer" containerID="5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964" Aug 13 00:21:05.978778 containerd[2021]: time="2025-08-13T00:21:05.978588202Z" level=info msg="RemoveContainer for \"5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964\"" Aug 13 00:21:05.984698 containerd[2021]: time="2025-08-13T00:21:05.984600946Z" level=info msg="RemoveContainer for \"5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964\" returns successfully" Aug 13 00:21:05.985190 kubelet[3219]: I0813 00:21:05.985037 3219 scope.go:117] "RemoveContainer" containerID="3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50" Aug 13 00:21:05.987711 containerd[2021]: time="2025-08-13T00:21:05.987317290Z" level=info msg="RemoveContainer for \"3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50\"" Aug 13 00:21:05.993389 containerd[2021]: time="2025-08-13T00:21:05.993250606Z" level=info msg="RemoveContainer for \"3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50\" returns successfully" Aug 13 00:21:05.993863 kubelet[3219]: I0813 00:21:05.993587 3219 scope.go:117] "RemoveContainer" containerID="f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a" Aug 13 00:21:05.995938 containerd[2021]: time="2025-08-13T00:21:05.995648038Z" level=info msg="RemoveContainer for \"f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a\"" Aug 13 00:21:06.004192 containerd[2021]: time="2025-08-13T00:21:06.002596122Z" level=info msg="RemoveContainer for \"f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a\" returns successfully" Aug 13 00:21:06.007614 kubelet[3219]: I0813 00:21:06.007421 3219 scope.go:117] "RemoveContainer" containerID="861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5" Aug 13 00:21:06.010561 containerd[2021]: time="2025-08-13T00:21:06.010469682Z" level=error msg="ContainerStatus for \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\": not found" Aug 13 00:21:06.011042 kubelet[3219]: E0813 00:21:06.011005 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\": not found" containerID="861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5" Aug 13 00:21:06.011232 kubelet[3219]: I0813 00:21:06.011182 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5"} err="failed to get container status \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"861a83622becc26dd59b6bcc07df7f7c908fe53076b28ab1af64fe0e957808e5\": not found" Aug 13 00:21:06.011561 kubelet[3219]: I0813 00:21:06.011411 3219 scope.go:117] "RemoveContainer" containerID="94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad" Aug 13 
00:21:06.011985 containerd[2021]: time="2025-08-13T00:21:06.011841474Z" level=error msg="ContainerStatus for \"94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad\": not found" Aug 13 00:21:06.012173 kubelet[3219]: E0813 00:21:06.012109 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad\": not found" containerID="94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad" Aug 13 00:21:06.012269 kubelet[3219]: I0813 00:21:06.012172 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad"} err="failed to get container status \"94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"94860fc77f26dfdaaea108ee2008d74dc71f661a2ab6c6b71cac6a386ff396ad\": not found" Aug 13 00:21:06.012269 kubelet[3219]: I0813 00:21:06.012208 3219 scope.go:117] "RemoveContainer" containerID="5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964" Aug 13 00:21:06.012574 containerd[2021]: time="2025-08-13T00:21:06.012514482Z" level=error msg="ContainerStatus for \"5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964\": not found" Aug 13 00:21:06.012867 kubelet[3219]: E0813 00:21:06.012825 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964\": not found" containerID="5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964" Aug 13 00:21:06.012983 kubelet[3219]: I0813 00:21:06.012905 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964"} err="failed to get container status \"5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964\": rpc error: code = NotFound desc = an error occurred when try to find container \"5990a9e9ee25173f048c5238a971ed19d745f378c94a86da1d82c11ba9aca964\": not found" Aug 13 00:21:06.012983 kubelet[3219]: I0813 00:21:06.012941 3219 scope.go:117] "RemoveContainer" containerID="3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50" Aug 13 00:21:06.013431 containerd[2021]: time="2025-08-13T00:21:06.013341378Z" level=error msg="ContainerStatus for \"3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50\": not found" Aug 13 00:21:06.013835 kubelet[3219]: E0813 00:21:06.013702 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50\": not found" containerID="3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50" Aug 13 00:21:06.013835 
kubelet[3219]: I0813 00:21:06.013769 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50"} err="failed to get container status \"3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e2e38eb07997712122e89fc979726e1cbf8a88b9ab234a4d0ba9b8a07bb6d50\": not found" Aug 13 00:21:06.013835 kubelet[3219]: I0813 00:21:06.013801 3219 scope.go:117] "RemoveContainer" containerID="f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a" Aug 13 00:21:06.014683 containerd[2021]: time="2025-08-13T00:21:06.014413194Z" level=error msg="ContainerStatus for \"f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a\": not found" Aug 13 00:21:06.014790 kubelet[3219]: E0813 00:21:06.014706 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a\": not found" containerID="f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a" Aug 13 00:21:06.014790 kubelet[3219]: I0813 00:21:06.014751 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a"} err="failed to get container status \"f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f89b2ff6f8f163e4c641e71051fca78551240191a3625c41b9400d7174f7a32a\": not found" Aug 13 00:21:06.061912 kubelet[3219]: I0813 00:21:06.060084 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-xtables-lock\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.061912 kubelet[3219]: I0813 00:21:06.060191 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be58da57-708a-440a-9d4d-10badcc9f077-cilium-config-path\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.061912 kubelet[3219]: I0813 00:21:06.060215 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:21:06.061912 kubelet[3219]: I0813 00:21:06.060242 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-bpf-maps\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.061912 kubelet[3219]: I0813 00:21:06.060274 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-cni-path\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.061912 kubelet[3219]: I0813 00:21:06.060307 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-cilium-cgroup\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.062312 kubelet[3219]: I0813 00:21:06.060343 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be58da57-708a-440a-9d4d-10badcc9f077-hubble-tls\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.062312 kubelet[3219]: I0813 00:21:06.060384 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-host-proc-sys-kernel\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.062312 kubelet[3219]: I0813 00:21:06.060419 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-cilium-run\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.062312 kubelet[3219]: I0813 00:21:06.060460 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgqbp\" (UniqueName: \"kubernetes.io/projected/be58da57-708a-440a-9d4d-10badcc9f077-kube-api-access-fgqbp\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.062312 kubelet[3219]: I0813 00:21:06.060492 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-host-proc-sys-net\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.062312 kubelet[3219]: I0813 00:21:06.060522 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-lib-modules\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.062665 kubelet[3219]: I0813 00:21:06.060562 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be58da57-708a-440a-9d4d-10badcc9f077-clustermesh-secrets\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") 
" Aug 13 00:21:06.062665 kubelet[3219]: I0813 00:21:06.060599 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-hostproc\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.062665 kubelet[3219]: I0813 00:21:06.060633 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-etc-cni-netd\") pod \"be58da57-708a-440a-9d4d-10badcc9f077\" (UID: \"be58da57-708a-440a-9d4d-10badcc9f077\") " Aug 13 00:21:06.062665 kubelet[3219]: I0813 00:21:06.060698 3219 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tmrsd\" (UniqueName: \"kubernetes.io/projected/ad5613d6-bb3a-4b38-851e-500e0b9f338e-kube-api-access-tmrsd\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.062665 kubelet[3219]: I0813 00:21:06.060725 3219 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-xtables-lock\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.062665 kubelet[3219]: I0813 00:21:06.060748 3219 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad5613d6-bb3a-4b38-851e-500e0b9f338e-cilium-config-path\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.063005 kubelet[3219]: I0813 00:21:06.060790 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:21:06.063005 kubelet[3219]: I0813 00:21:06.060831 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:21:06.063005 kubelet[3219]: I0813 00:21:06.060868 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-cni-path" (OuterVolumeSpecName: "cni-path") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:21:06.063005 kubelet[3219]: I0813 00:21:06.060969 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:21:06.063498 kubelet[3219]: I0813 00:21:06.063444 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:21:06.063498 kubelet[3219]: I0813 00:21:06.063435 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:21:06.063723 kubelet[3219]: I0813 00:21:06.063691 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:21:06.063942 kubelet[3219]: I0813 00:21:06.063864 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:21:06.065271 kubelet[3219]: I0813 00:21:06.065201 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-hostproc" (OuterVolumeSpecName: "hostproc") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:21:06.069446 kubelet[3219]: I0813 00:21:06.069275 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be58da57-708a-440a-9d4d-10badcc9f077-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:21:06.070231 kubelet[3219]: I0813 00:21:06.070145 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be58da57-708a-440a-9d4d-10badcc9f077-kube-api-access-fgqbp" (OuterVolumeSpecName: "kube-api-access-fgqbp") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "kube-api-access-fgqbp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:21:06.073806 kubelet[3219]: I0813 00:21:06.073660 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be58da57-708a-440a-9d4d-10badcc9f077-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:21:06.075097 kubelet[3219]: I0813 00:21:06.074997 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be58da57-708a-440a-9d4d-10badcc9f077-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "be58da57-708a-440a-9d4d-10badcc9f077" (UID: "be58da57-708a-440a-9d4d-10badcc9f077"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:21:06.161756 kubelet[3219]: I0813 00:21:06.161418 3219 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-host-proc-sys-net\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.161756 kubelet[3219]: I0813 00:21:06.161470 3219 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-lib-modules\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.161756 kubelet[3219]: I0813 00:21:06.161493 3219 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be58da57-708a-440a-9d4d-10badcc9f077-clustermesh-secrets\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.161756 kubelet[3219]: I0813 00:21:06.161519 3219 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-hostproc\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.161756 kubelet[3219]: I0813 00:21:06.161541 3219 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-etc-cni-netd\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.161756 kubelet[3219]: I0813 00:21:06.161561 3219 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be58da57-708a-440a-9d4d-10badcc9f077-cilium-config-path\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.161756 kubelet[3219]: I0813 00:21:06.161582 3219 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-bpf-maps\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.161756 kubelet[3219]: I0813 00:21:06.161603 3219 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-cni-path\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.162316 kubelet[3219]: I0813 00:21:06.161622 3219 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-cilium-cgroup\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.162316 kubelet[3219]: I0813 00:21:06.161643 3219 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be58da57-708a-440a-9d4d-10badcc9f077-hubble-tls\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.162316 kubelet[3219]: I0813 00:21:06.161663 3219 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-host-proc-sys-kernel\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 
00:21:06.162316 kubelet[3219]: I0813 00:21:06.161687 3219 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be58da57-708a-440a-9d4d-10badcc9f077-cilium-run\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.162316 kubelet[3219]: I0813 00:21:06.161707 3219 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fgqbp\" (UniqueName: \"kubernetes.io/projected/be58da57-708a-440a-9d4d-10badcc9f077-kube-api-access-fgqbp\") on node \"ip-172-31-19-145\" DevicePath \"\"" Aug 13 00:21:06.245781 systemd[1]: Removed slice kubepods-besteffort-podad5613d6_bb3a_4b38_851e_500e0b9f338e.slice - libcontainer container kubepods-besteffort-podad5613d6_bb3a_4b38_851e_500e0b9f338e.slice. Aug 13 00:21:06.266292 systemd[1]: Removed slice kubepods-burstable-podbe58da57_708a_440a_9d4d_10badcc9f077.slice - libcontainer container kubepods-burstable-podbe58da57_708a_440a_9d4d_10badcc9f077.slice. Aug 13 00:21:06.266546 systemd[1]: kubepods-burstable-podbe58da57_708a_440a_9d4d_10badcc9f077.slice: Consumed 15.446s CPU time. Aug 13 00:21:06.533057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590-rootfs.mount: Deactivated successfully. Aug 13 00:21:06.533241 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590-shm.mount: Deactivated successfully. Aug 13 00:21:06.533379 systemd[1]: var-lib-kubelet-pods-be58da57\x2d708a\x2d440a\x2d9d4d\x2d10badcc9f077-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:21:06.533548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05-rootfs.mount: Deactivated successfully. Aug 13 00:21:06.533685 systemd[1]: var-lib-kubelet-pods-be58da57\x2d708a\x2d440a\x2d9d4d\x2d10badcc9f077-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfgqbp.mount: Deactivated successfully. Aug 13 00:21:06.533823 systemd[1]: var-lib-kubelet-pods-ad5613d6\x2dbb3a\x2d4b38\x2d851e\x2d500e0b9f338e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtmrsd.mount: Deactivated successfully. Aug 13 00:21:06.533990 systemd[1]: var-lib-kubelet-pods-be58da57\x2d708a\x2d440a\x2d9d4d\x2d10badcc9f077-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:21:07.441282 kubelet[3219]: I0813 00:21:07.441206 3219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad5613d6-bb3a-4b38-851e-500e0b9f338e" path="/var/lib/kubelet/pods/ad5613d6-bb3a-4b38-851e-500e0b9f338e/volumes" Aug 13 00:21:07.442609 kubelet[3219]: I0813 00:21:07.442287 3219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be58da57-708a-440a-9d4d-10badcc9f077" path="/var/lib/kubelet/pods/be58da57-708a-440a-9d4d-10badcc9f077/volumes" Aug 13 00:21:07.459135 sshd[5146]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:07.464653 systemd[1]: sshd@24-172.31.19.145:22-139.178.89.65:55652.service: Deactivated successfully. Aug 13 00:21:07.469384 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:21:07.470020 systemd[1]: session-25.scope: Consumed 1.442s CPU time. Aug 13 00:21:07.473454 systemd-logind[1993]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:21:07.475799 systemd-logind[1993]: Removed session 25. 
Aug 13 00:21:07.495434 systemd[1]: Started sshd@25-172.31.19.145:22-139.178.89.65:55662.service - OpenSSH per-connection server daemon (139.178.89.65:55662). Aug 13 00:21:07.678464 sshd[5304]: Accepted publickey for core from 139.178.89.65 port 55662 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:07.681189 sshd[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:07.689553 systemd-logind[1993]: New session 26 of user core. Aug 13 00:21:07.700164 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 00:21:07.810162 ntpd[1987]: Deleting interface #12 lxc_health, fe80::640e:12ff:fe36:fc45%8#123, interface stats: received=0, sent=0, dropped=0, active_time=103 secs Aug 13 00:21:07.810676 ntpd[1987]: 13 Aug 00:21:07 ntpd[1987]: Deleting interface #12 lxc_health, fe80::640e:12ff:fe36:fc45%8#123, interface stats: received=0, sent=0, dropped=0, active_time=103 secs Aug 13 00:21:08.694103 kubelet[3219]: E0813 00:21:08.693663 3219 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:21:09.739186 sshd[5304]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:09.750814 systemd[1]: sshd@25-172.31.19.145:22-139.178.89.65:55662.service: Deactivated successfully. Aug 13 00:21:09.759617 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:21:09.760831 systemd[1]: session-26.scope: Consumed 1.833s CPU time. Aug 13 00:21:09.762968 systemd-logind[1993]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:21:09.791766 systemd-logind[1993]: Removed session 26. Aug 13 00:21:09.799781 systemd[1]: Started sshd@26-172.31.19.145:22-139.178.89.65:47098.service - OpenSSH per-connection server daemon (139.178.89.65:47098). Aug 13 00:21:09.823853 systemd[1]: Created slice kubepods-burstable-podc928ee88_4711_422e_a3c2_376dfb53bd8c.slice - libcontainer container kubepods-burstable-podc928ee88_4711_422e_a3c2_376dfb53bd8c.slice. 
Aug 13 00:21:09.889246 kubelet[3219]: I0813 00:21:09.889129 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c928ee88-4711-422e-a3c2-376dfb53bd8c-cilium-cgroup\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.889246 kubelet[3219]: I0813 00:21:09.889203 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c928ee88-4711-422e-a3c2-376dfb53bd8c-cilium-config-path\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.889246 kubelet[3219]: I0813 00:21:09.889247 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c928ee88-4711-422e-a3c2-376dfb53bd8c-etc-cni-netd\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.890057 kubelet[3219]: I0813 00:21:09.889282 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c928ee88-4711-422e-a3c2-376dfb53bd8c-lib-modules\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.890057 kubelet[3219]: I0813 00:21:09.889321 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c928ee88-4711-422e-a3c2-376dfb53bd8c-clustermesh-secrets\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.891800 kubelet[3219]: I0813 00:21:09.890568 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c928ee88-4711-422e-a3c2-376dfb53bd8c-host-proc-sys-net\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.891800 kubelet[3219]: I0813 00:21:09.890749 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st4ws\" (UniqueName: \"kubernetes.io/projected/c928ee88-4711-422e-a3c2-376dfb53bd8c-kube-api-access-st4ws\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.891800 kubelet[3219]: I0813 00:21:09.890875 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c928ee88-4711-422e-a3c2-376dfb53bd8c-hostproc\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.891800 kubelet[3219]: I0813 00:21:09.891237 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c928ee88-4711-422e-a3c2-376dfb53bd8c-xtables-lock\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.891800 kubelet[3219]: I0813 00:21:09.891286 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c928ee88-4711-422e-a3c2-376dfb53bd8c-hubble-tls\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.891800 kubelet[3219]: I0813 00:21:09.891361 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c928ee88-4711-422e-a3c2-376dfb53bd8c-bpf-maps\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.894551 kubelet[3219]: I0813 00:21:09.891410 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c928ee88-4711-422e-a3c2-376dfb53bd8c-host-proc-sys-kernel\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.894551 kubelet[3219]: I0813 00:21:09.891473 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c928ee88-4711-422e-a3c2-376dfb53bd8c-cilium-run\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.894551 kubelet[3219]: I0813 00:21:09.891667 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c928ee88-4711-422e-a3c2-376dfb53bd8c-cni-path\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.894980 kubelet[3219]: I0813 00:21:09.894791 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c928ee88-4711-422e-a3c2-376dfb53bd8c-cilium-ipsec-secrets\") pod \"cilium-ssxvj\" (UID: \"c928ee88-4711-422e-a3c2-376dfb53bd8c\") " pod="kube-system/cilium-ssxvj" Aug 13 00:21:09.994088 sshd[5315]: Accepted publickey for core from 139.178.89.65 port 47098 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:09.998020 sshd[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:10.020292 systemd-logind[1993]: New session 27 of user core. Aug 13 00:21:10.026167 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 00:21:10.137448 containerd[2021]: time="2025-08-13T00:21:10.137343286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ssxvj,Uid:c928ee88-4711-422e-a3c2-376dfb53bd8c,Namespace:kube-system,Attempt:0,}" Aug 13 00:21:10.181237 containerd[2021]: time="2025-08-13T00:21:10.181093499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:21:10.181921 containerd[2021]: time="2025-08-13T00:21:10.181798463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:21:10.182046 containerd[2021]: time="2025-08-13T00:21:10.181946687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:10.182286 containerd[2021]: time="2025-08-13T00:21:10.182201267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:10.191187 sshd[5315]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:10.220363 systemd[1]: sshd@26-172.31.19.145:22-139.178.89.65:47098.service: Deactivated successfully. Aug 13 00:21:10.224856 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:21:10.228337 systemd-logind[1993]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:21:10.240429 systemd[1]: Started cri-containerd-be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae.scope - libcontainer container be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae. Aug 13 00:21:10.251326 systemd[1]: Started sshd@27-172.31.19.145:22-139.178.89.65:47114.service - OpenSSH per-connection server daemon (139.178.89.65:47114). Aug 13 00:21:10.255093 systemd-logind[1993]: Removed session 27. Aug 13 00:21:10.320123 containerd[2021]: time="2025-08-13T00:21:10.320049623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ssxvj,Uid:c928ee88-4711-422e-a3c2-376dfb53bd8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae\"" Aug 13 00:21:10.333243 containerd[2021]: time="2025-08-13T00:21:10.333183047Z" level=info msg="CreateContainer within sandbox \"be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:21:10.357372 containerd[2021]: time="2025-08-13T00:21:10.357230700Z" level=info msg="CreateContainer within sandbox \"be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c91b3deacb3ea77739fc9e8f639df3a3e288e2339ef07d7e0e39797650e5b91c\"" Aug 13 00:21:10.358249 containerd[2021]: time="2025-08-13T00:21:10.358142664Z" level=info msg="StartContainer for \"c91b3deacb3ea77739fc9e8f639df3a3e288e2339ef07d7e0e39797650e5b91c\"" Aug 13 00:21:10.401535 systemd[1]: Started cri-containerd-c91b3deacb3ea77739fc9e8f639df3a3e288e2339ef07d7e0e39797650e5b91c.scope - libcontainer container c91b3deacb3ea77739fc9e8f639df3a3e288e2339ef07d7e0e39797650e5b91c. Aug 13 00:21:10.453375 containerd[2021]: time="2025-08-13T00:21:10.453158592Z" level=info msg="StartContainer for \"c91b3deacb3ea77739fc9e8f639df3a3e288e2339ef07d7e0e39797650e5b91c\" returns successfully" Aug 13 00:21:10.462195 sshd[5354]: Accepted publickey for core from 139.178.89.65 port 47114 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:10.467342 sshd[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:10.480338 systemd[1]: cri-containerd-c91b3deacb3ea77739fc9e8f639df3a3e288e2339ef07d7e0e39797650e5b91c.scope: Deactivated successfully. Aug 13 00:21:10.483007 systemd-logind[1993]: New session 28 of user core. Aug 13 00:21:10.488606 systemd[1]: Started session-28.scope - Session 28 of User core. 
Aug 13 00:21:10.539992 containerd[2021]: time="2025-08-13T00:21:10.539242800Z" level=info msg="shim disconnected" id=c91b3deacb3ea77739fc9e8f639df3a3e288e2339ef07d7e0e39797650e5b91c namespace=k8s.io Aug 13 00:21:10.539992 containerd[2021]: time="2025-08-13T00:21:10.539321388Z" level=warning msg="cleaning up after shim disconnected" id=c91b3deacb3ea77739fc9e8f639df3a3e288e2339ef07d7e0e39797650e5b91c namespace=k8s.io Aug 13 00:21:10.539992 containerd[2021]: time="2025-08-13T00:21:10.539342496Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:10.982455 containerd[2021]: time="2025-08-13T00:21:10.982387131Z" level=info msg="CreateContainer within sandbox \"be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:21:11.006860 containerd[2021]: time="2025-08-13T00:21:11.006788819Z" level=info msg="CreateContainer within sandbox \"be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"58a48f2eefb868911585ddcda5b790cd45d4b1f2a7237f2845f545ed2bec2f5a\"" Aug 13 00:21:11.008426 containerd[2021]: time="2025-08-13T00:21:11.008317571Z" level=info msg="StartContainer for \"58a48f2eefb868911585ddcda5b790cd45d4b1f2a7237f2845f545ed2bec2f5a\"" Aug 13 00:21:11.070225 systemd[1]: Started cri-containerd-58a48f2eefb868911585ddcda5b790cd45d4b1f2a7237f2845f545ed2bec2f5a.scope - libcontainer container 58a48f2eefb868911585ddcda5b790cd45d4b1f2a7237f2845f545ed2bec2f5a. Aug 13 00:21:11.118926 containerd[2021]: time="2025-08-13T00:21:11.117279155Z" level=info msg="StartContainer for \"58a48f2eefb868911585ddcda5b790cd45d4b1f2a7237f2845f545ed2bec2f5a\" returns successfully" Aug 13 00:21:11.131191 systemd[1]: cri-containerd-58a48f2eefb868911585ddcda5b790cd45d4b1f2a7237f2845f545ed2bec2f5a.scope: Deactivated successfully. Aug 13 00:21:11.167279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58a48f2eefb868911585ddcda5b790cd45d4b1f2a7237f2845f545ed2bec2f5a-rootfs.mount: Deactivated successfully. 
Aug 13 00:21:11.177384 containerd[2021]: time="2025-08-13T00:21:11.177185808Z" level=info msg="shim disconnected" id=58a48f2eefb868911585ddcda5b790cd45d4b1f2a7237f2845f545ed2bec2f5a namespace=k8s.io Aug 13 00:21:11.177384 containerd[2021]: time="2025-08-13T00:21:11.177305100Z" level=warning msg="cleaning up after shim disconnected" id=58a48f2eefb868911585ddcda5b790cd45d4b1f2a7237f2845f545ed2bec2f5a namespace=k8s.io Aug 13 00:21:11.177384 containerd[2021]: time="2025-08-13T00:21:11.177343440Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:11.986831 containerd[2021]: time="2025-08-13T00:21:11.986656228Z" level=info msg="CreateContainer within sandbox \"be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:21:12.023910 containerd[2021]: time="2025-08-13T00:21:12.020873220Z" level=info msg="CreateContainer within sandbox \"be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5bb75122280657f83287252fa6bbd4fdbee46f498e44166a7f42795dce08ec34\"" Aug 13 00:21:12.023910 containerd[2021]: time="2025-08-13T00:21:12.022441848Z" level=info msg="StartContainer for \"5bb75122280657f83287252fa6bbd4fdbee46f498e44166a7f42795dce08ec34\"" Aug 13 00:21:12.083260 systemd[1]: run-containerd-runc-k8s.io-5bb75122280657f83287252fa6bbd4fdbee46f498e44166a7f42795dce08ec34-runc.rLOeT0.mount: Deactivated successfully. Aug 13 00:21:12.095226 systemd[1]: Started cri-containerd-5bb75122280657f83287252fa6bbd4fdbee46f498e44166a7f42795dce08ec34.scope - libcontainer container 5bb75122280657f83287252fa6bbd4fdbee46f498e44166a7f42795dce08ec34. Aug 13 00:21:12.154806 containerd[2021]: time="2025-08-13T00:21:12.154729573Z" level=info msg="StartContainer for \"5bb75122280657f83287252fa6bbd4fdbee46f498e44166a7f42795dce08ec34\" returns successfully" Aug 13 00:21:12.158494 systemd[1]: cri-containerd-5bb75122280657f83287252fa6bbd4fdbee46f498e44166a7f42795dce08ec34.scope: Deactivated successfully. Aug 13 00:21:12.203009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bb75122280657f83287252fa6bbd4fdbee46f498e44166a7f42795dce08ec34-rootfs.mount: Deactivated successfully. 
Aug 13 00:21:12.212753 containerd[2021]: time="2025-08-13T00:21:12.212609377Z" level=info msg="shim disconnected" id=5bb75122280657f83287252fa6bbd4fdbee46f498e44166a7f42795dce08ec34 namespace=k8s.io Aug 13 00:21:12.212753 containerd[2021]: time="2025-08-13T00:21:12.212708809Z" level=warning msg="cleaning up after shim disconnected" id=5bb75122280657f83287252fa6bbd4fdbee46f498e44166a7f42795dce08ec34 namespace=k8s.io Aug 13 00:21:12.212753 containerd[2021]: time="2025-08-13T00:21:12.212732569Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:12.997904 containerd[2021]: time="2025-08-13T00:21:12.997777865Z" level=info msg="CreateContainer within sandbox \"be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:21:13.026926 containerd[2021]: time="2025-08-13T00:21:13.024517297Z" level=info msg="CreateContainer within sandbox \"be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c842f5f0d18435a1e03b34f4c1913ea6305389964ea4554b38234cd8f8bbbb90\"" Aug 13 00:21:13.031532 containerd[2021]: time="2025-08-13T00:21:13.031165741Z" level=info msg="StartContainer for \"c842f5f0d18435a1e03b34f4c1913ea6305389964ea4554b38234cd8f8bbbb90\"" Aug 13 00:21:13.091503 systemd[1]: run-containerd-runc-k8s.io-c842f5f0d18435a1e03b34f4c1913ea6305389964ea4554b38234cd8f8bbbb90-runc.Kk6Pcs.mount: Deactivated successfully. Aug 13 00:21:13.104254 systemd[1]: Started cri-containerd-c842f5f0d18435a1e03b34f4c1913ea6305389964ea4554b38234cd8f8bbbb90.scope - libcontainer container c842f5f0d18435a1e03b34f4c1913ea6305389964ea4554b38234cd8f8bbbb90. Aug 13 00:21:13.171046 systemd[1]: cri-containerd-c842f5f0d18435a1e03b34f4c1913ea6305389964ea4554b38234cd8f8bbbb90.scope: Deactivated successfully. Aug 13 00:21:13.177471 containerd[2021]: time="2025-08-13T00:21:13.177224666Z" level=info msg="StartContainer for \"c842f5f0d18435a1e03b34f4c1913ea6305389964ea4554b38234cd8f8bbbb90\" returns successfully" Aug 13 00:21:13.216482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c842f5f0d18435a1e03b34f4c1913ea6305389964ea4554b38234cd8f8bbbb90-rootfs.mount: Deactivated successfully. 
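
The same pattern repeats for each init step: apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each get a CreateContainer/StartContainer pair, run briefly, and end with a scope deactivation and a shim cleanup. Put back into pod-spec terms, the container names and the volumes attached at 00:21:09 correspond roughly to the fragment below; the hostPath locations and the secret name are assumptions based on Cilium's usual DaemonSet layout and do not appear in this log.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        hostPath := func(p string) corev1.VolumeSource {
            return corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: p}}
        }
        spec := corev1.PodSpec{
            // Volume names come from the VerifyControllerAttachedVolume entries; paths are assumed.
            Volumes: []corev1.Volume{
                {Name: "bpf-maps", VolumeSource: hostPath("/sys/fs/bpf")},
                {Name: "cilium-run", VolumeSource: hostPath("/var/run/cilium")},
                {Name: "cni-path", VolumeSource: hostPath("/opt/cni/bin")},
                {Name: "host-proc-sys-kernel", VolumeSource: hostPath("/proc/sys/kernel")},
                {Name: "cilium-ipsec-secrets", VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "cilium-ipsec-keys"}, // assumed name
                }},
                // hubble-tls is a projected volume in the log; elided here.
            },
            // Init steps in the order containerd creates them above.
            InitContainers: []corev1.Container{
                {Name: "mount-cgroup"},
                {Name: "apply-sysctl-overwrites"},
                {Name: "mount-bpf-fs"},
                {Name: "clean-cilium-state"},
            },
            Containers: []corev1.Container{{Name: "cilium-agent"}},
        }
        fmt.Printf("%d volumes, %d init containers\n", len(spec.Volumes), len(spec.InitContainers))
    }
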
Aug 13 00:21:13.226959 containerd[2021]: time="2025-08-13T00:21:13.226720610Z" level=info msg="shim disconnected" id=c842f5f0d18435a1e03b34f4c1913ea6305389964ea4554b38234cd8f8bbbb90 namespace=k8s.io Aug 13 00:21:13.226959 containerd[2021]: time="2025-08-13T00:21:13.226813010Z" level=warning msg="cleaning up after shim disconnected" id=c842f5f0d18435a1e03b34f4c1913ea6305389964ea4554b38234cd8f8bbbb90 namespace=k8s.io Aug 13 00:21:13.226959 containerd[2021]: time="2025-08-13T00:21:13.226839146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:13.695432 kubelet[3219]: E0813 00:21:13.695374 3219 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:21:14.001148 containerd[2021]: time="2025-08-13T00:21:14.001053854Z" level=info msg="CreateContainer within sandbox \"be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:21:14.032488 containerd[2021]: time="2025-08-13T00:21:14.031466414Z" level=info msg="CreateContainer within sandbox \"be76de77e9684f91f617050febd4381c87f1e46b31d6e569fb0fb2ec8fcb58ae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d515c9bd417fab45e332ae0023c85acd6d887f57f71941c2aded7f8a94936e90\"" Aug 13 00:21:14.034793 containerd[2021]: time="2025-08-13T00:21:14.032987942Z" level=info msg="StartContainer for \"d515c9bd417fab45e332ae0023c85acd6d887f57f71941c2aded7f8a94936e90\"" Aug 13 00:21:14.109334 systemd[1]: Started cri-containerd-d515c9bd417fab45e332ae0023c85acd6d887f57f71941c2aded7f8a94936e90.scope - libcontainer container d515c9bd417fab45e332ae0023c85acd6d887f57f71941c2aded7f8a94936e90. Aug 13 00:21:14.169504 containerd[2021]: time="2025-08-13T00:21:14.169422951Z" level=info msg="StartContainer for \"d515c9bd417fab45e332ae0023c85acd6d887f57f71941c2aded7f8a94936e90\" returns successfully" Aug 13 00:21:15.000928 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Aug 13 00:21:15.049389 kubelet[3219]: I0813 00:21:15.048359 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ssxvj" podStartSLOduration=6.048335811 podStartE2EDuration="6.048335811s" podCreationTimestamp="2025-08-13 00:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:21:15.046730151 +0000 UTC m=+141.954712414" watchObservedRunningTime="2025-08-13 00:21:15.048335811 +0000 UTC m=+141.956318038" Aug 13 00:21:16.286591 kubelet[3219]: I0813 00:21:16.285126 3219 setters.go:618] "Node became not ready" node="ip-172-31-19-145" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:21:16Z","lastTransitionTime":"2025-08-13T00:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 00:21:17.000233 systemd[1]: run-containerd-runc-k8s.io-d515c9bd417fab45e332ae0023c85acd6d887f57f71941c2aded7f8a94936e90-runc.gLbTZS.mount: Deactivated successfully. Aug 13 00:21:19.385799 systemd-networkd[1929]: lxc_health: Link UP Aug 13 00:21:19.396869 (udev-worker)[6169]: Network interface NamePolicy= disabled on kernel command line. 
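
With clean-cilium-state finished, the agent container itself (d515c9bd…) starts and the kubelet records a pod startup duration of about 6s for cilium-ssxvj, yet the node is still marked NotReady because no CNI configuration has been installed yet ("cni plugin not initialized"); the lxc_health link coming up is the veth pair Cilium creates for its health checks, and the node condition should clear once the agent writes its CNI config. The same state can be read from the API server; a minimal client-go sketch, assuming a kubeconfig at /etc/kubernetes/admin.conf (the path is not taken from this log):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; inside a pod, rest.InClusterConfig() would be used instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()

        node, err := cs.CoreV1().Nodes().Get(ctx, "ip-172-31-19-145", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                // Mirrors the "Node became not ready" setter entry above.
                fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
            }
        }

        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "cilium-ssxvj", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod phase=%s started=%v\n", pod.Status.Phase, pod.Status.StartTime)
    }
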
Aug 13 00:21:19.399622 systemd-networkd[1929]: lxc_health: Gained carrier Aug 13 00:21:19.464901 kubelet[3219]: E0813 00:21:19.464676 3219 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53492->127.0.0.1:44333: write tcp 127.0.0.1:53492->127.0.0.1:44333: write: connection reset by peer Aug 13 00:21:21.146468 systemd-networkd[1929]: lxc_health: Gained IPv6LL Aug 13 00:21:23.810163 ntpd[1987]: Listen normally on 15 lxc_health [fe80::bc47:94ff:fe6b:b374%14]:123 Aug 13 00:21:23.810844 ntpd[1987]: 13 Aug 00:21:23 ntpd[1987]: Listen normally on 15 lxc_health [fe80::bc47:94ff:fe6b:b374%14]:123 Aug 13 00:21:26.376267 sshd[5354]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:26.384209 systemd-logind[1993]: Session 28 logged out. Waiting for processes to exit. Aug 13 00:21:26.386843 systemd[1]: sshd@27-172.31.19.145:22-139.178.89.65:47114.service: Deactivated successfully. Aug 13 00:21:26.397007 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 00:21:26.401329 systemd-logind[1993]: Removed session 28. Aug 13 00:21:40.194219 systemd[1]: cri-containerd-e0bdb9d42b89aaf8917616ea139ffd591e75732ae256b1e4b13bcf3384e90424.scope: Deactivated successfully. Aug 13 00:21:40.194756 systemd[1]: cri-containerd-e0bdb9d42b89aaf8917616ea139ffd591e75732ae256b1e4b13bcf3384e90424.scope: Consumed 5.604s CPU time, 22.3M memory peak, 0B memory swap peak. Aug 13 00:21:40.236739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0bdb9d42b89aaf8917616ea139ffd591e75732ae256b1e4b13bcf3384e90424-rootfs.mount: Deactivated successfully. Aug 13 00:21:40.246250 containerd[2021]: time="2025-08-13T00:21:40.246149836Z" level=info msg="shim disconnected" id=e0bdb9d42b89aaf8917616ea139ffd591e75732ae256b1e4b13bcf3384e90424 namespace=k8s.io Aug 13 00:21:40.246250 containerd[2021]: time="2025-08-13T00:21:40.246233860Z" level=warning msg="cleaning up after shim disconnected" id=e0bdb9d42b89aaf8917616ea139ffd591e75732ae256b1e4b13bcf3384e90424 namespace=k8s.io Aug 13 00:21:40.247241 containerd[2021]: time="2025-08-13T00:21:40.246257296Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:40.268123 containerd[2021]: time="2025-08-13T00:21:40.268045000Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:21:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 00:21:41.091768 kubelet[3219]: I0813 00:21:41.091668 3219 scope.go:117] "RemoveContainer" containerID="e0bdb9d42b89aaf8917616ea139ffd591e75732ae256b1e4b13bcf3384e90424" Aug 13 00:21:41.096093 containerd[2021]: time="2025-08-13T00:21:41.095996512Z" level=info msg="CreateContainer within sandbox \"81f776aa1f5bb1e401f5ad001a5916a1fa057380bd82a6f33f3fc73519caa27d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Aug 13 00:21:41.126098 containerd[2021]: time="2025-08-13T00:21:41.126034336Z" level=info msg="CreateContainer within sandbox \"81f776aa1f5bb1e401f5ad001a5916a1fa057380bd82a6f33f3fc73519caa27d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8a639b4b405ef4b9cd250dbe07cd26e2c047ec993e67eb3e1f1186a4228d5d63\"" Aug 13 00:21:41.127103 containerd[2021]: time="2025-08-13T00:21:41.126749140Z" level=info msg="StartContainer for \"8a639b4b405ef4b9cd250dbe07cd26e2c047ec993e67eb3e1f1186a4228d5d63\"" Aug 13 00:21:41.183222 systemd[1]: Started 
cri-containerd-8a639b4b405ef4b9cd250dbe07cd26e2c047ec993e67eb3e1f1186a4228d5d63.scope - libcontainer container 8a639b4b405ef4b9cd250dbe07cd26e2c047ec993e67eb3e1f1186a4228d5d63. Aug 13 00:21:41.256853 containerd[2021]: time="2025-08-13T00:21:41.256682717Z" level=info msg="StartContainer for \"8a639b4b405ef4b9cd250dbe07cd26e2c047ec993e67eb3e1f1186a4228d5d63\" returns successfully" Aug 13 00:21:46.678323 systemd[1]: cri-containerd-56c75af2f49bee5170dce06615edf6b4160ae1d39288025a93f0b744c501460b.scope: Deactivated successfully. Aug 13 00:21:46.679152 systemd[1]: cri-containerd-56c75af2f49bee5170dce06615edf6b4160ae1d39288025a93f0b744c501460b.scope: Consumed 5.277s CPU time, 14.1M memory peak, 0B memory swap peak. Aug 13 00:21:46.718985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56c75af2f49bee5170dce06615edf6b4160ae1d39288025a93f0b744c501460b-rootfs.mount: Deactivated successfully. Aug 13 00:21:46.733297 containerd[2021]: time="2025-08-13T00:21:46.733021920Z" level=info msg="shim disconnected" id=56c75af2f49bee5170dce06615edf6b4160ae1d39288025a93f0b744c501460b namespace=k8s.io Aug 13 00:21:46.733297 containerd[2021]: time="2025-08-13T00:21:46.733098804Z" level=warning msg="cleaning up after shim disconnected" id=56c75af2f49bee5170dce06615edf6b4160ae1d39288025a93f0b744c501460b namespace=k8s.io Aug 13 00:21:46.733297 containerd[2021]: time="2025-08-13T00:21:46.733122696Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:47.029957 kubelet[3219]: E0813 00:21:47.029668 3219 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-145?timeout=10s\": context deadline exceeded" Aug 13 00:21:47.119858 kubelet[3219]: I0813 00:21:47.119814 3219 scope.go:117] "RemoveContainer" containerID="56c75af2f49bee5170dce06615edf6b4160ae1d39288025a93f0b744c501460b" Aug 13 00:21:47.123299 containerd[2021]: time="2025-08-13T00:21:47.123117346Z" level=info msg="CreateContainer within sandbox \"50a07e492f79756a719a483337553ece6cd84596d37fb3f0f82e3e765116ac7b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Aug 13 00:21:47.152028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444461693.mount: Deactivated successfully. Aug 13 00:21:47.154953 containerd[2021]: time="2025-08-13T00:21:47.154853146Z" level=info msg="CreateContainer within sandbox \"50a07e492f79756a719a483337553ece6cd84596d37fb3f0f82e3e765116ac7b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5cbc5c4b7450bfd5a2990b18c5fbec85fd8ae733018a56452963fa1d352518dc\"" Aug 13 00:21:47.155921 containerd[2021]: time="2025-08-13T00:21:47.155768542Z" level=info msg="StartContainer for \"5cbc5c4b7450bfd5a2990b18c5fbec85fd8ae733018a56452963fa1d352518dc\"" Aug 13 00:21:47.211229 systemd[1]: Started cri-containerd-5cbc5c4b7450bfd5a2990b18c5fbec85fd8ae733018a56452963fa1d352518dc.scope - libcontainer container 5cbc5c4b7450bfd5a2990b18c5fbec85fd8ae733018a56452963fa1d352518dc. 
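
The two scope deactivations above record the kube-controller-manager container (e0bdb9d4…, 5.6s CPU) and the kube-scheduler container (56c75af2…, 5.3s CPU) exiting, around the same time the kubelet's own lease renewal times out, which suggests the local API server was briefly unresponsive. In each case the kubelet removes the dead container ("RemoveContainer") and creates a replacement with Attempt:1 inside the existing pod sandbox (81f776aa…, 50a07e49…) rather than rebuilding the pod. On a kubeadm-style control plane these are static pods, so the restart shows up as an incremented restartCount and a populated lastState on the mirror pods; the sketch below assumes the usual <component>-<nodeName> naming, which is not confirmed by this log.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Mirror-pod names for static pods are assumed to follow <component>-<nodeName>.
        for _, name := range []string{
            "kube-controller-manager-ip-172-31-19-145",
            "kube-scheduler-ip-172-31-19-145",
        } {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                fmt.Println(name, err)
                continue
            }
            for _, st := range pod.Status.ContainerStatuses {
                // Attempt:1 in the CRI entries corresponds to restartCount 1 here.
                fmt.Printf("%s restarts=%d", st.Name, st.RestartCount)
                if t := st.LastTerminationState.Terminated; t != nil {
                    fmt.Printf(" lastExit=%d finishedAt=%v", t.ExitCode, t.FinishedAt)
                }
                fmt.Println()
            }
        }
    }
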
Aug 13 00:21:47.275085 containerd[2021]: time="2025-08-13T00:21:47.274837283Z" level=info msg="StartContainer for \"5cbc5c4b7450bfd5a2990b18c5fbec85fd8ae733018a56452963fa1d352518dc\" returns successfully" Aug 13 00:21:53.397867 containerd[2021]: time="2025-08-13T00:21:53.397734245Z" level=info msg="StopPodSandbox for \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\"" Aug 13 00:21:53.399394 containerd[2021]: time="2025-08-13T00:21:53.397892513Z" level=info msg="TearDown network for sandbox \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\" successfully" Aug 13 00:21:53.399394 containerd[2021]: time="2025-08-13T00:21:53.397920461Z" level=info msg="StopPodSandbox for \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\" returns successfully" Aug 13 00:21:53.399394 containerd[2021]: time="2025-08-13T00:21:53.398705153Z" level=info msg="RemovePodSandbox for \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\"" Aug 13 00:21:53.399394 containerd[2021]: time="2025-08-13T00:21:53.398750345Z" level=info msg="Forcibly stopping sandbox \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\"" Aug 13 00:21:53.399851 containerd[2021]: time="2025-08-13T00:21:53.398873825Z" level=info msg="TearDown network for sandbox \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\" successfully" Aug 13 00:21:53.406049 containerd[2021]: time="2025-08-13T00:21:53.405970505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:21:53.406406 containerd[2021]: time="2025-08-13T00:21:53.406070897Z" level=info msg="RemovePodSandbox \"edf1fa7917a4edcd8c80c689b540b858a7adcbb33bda457d7c1dfa2e57ae7d05\" returns successfully" Aug 13 00:21:53.406852 containerd[2021]: time="2025-08-13T00:21:53.406780793Z" level=info msg="StopPodSandbox for \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\"" Aug 13 00:21:53.406973 containerd[2021]: time="2025-08-13T00:21:53.406947905Z" level=info msg="TearDown network for sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" successfully" Aug 13 00:21:53.407037 containerd[2021]: time="2025-08-13T00:21:53.406973729Z" level=info msg="StopPodSandbox for \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" returns successfully" Aug 13 00:21:53.407684 containerd[2021]: time="2025-08-13T00:21:53.407514725Z" level=info msg="RemovePodSandbox for \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\"" Aug 13 00:21:53.407684 containerd[2021]: time="2025-08-13T00:21:53.407560661Z" level=info msg="Forcibly stopping sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\"" Aug 13 00:21:53.407684 containerd[2021]: time="2025-08-13T00:21:53.407659001Z" level=info msg="TearDown network for sandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" successfully" Aug 13 00:21:53.413714 containerd[2021]: time="2025-08-13T00:21:53.413644553Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
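
The StopPodSandbox / RemovePodSandbox pairs above look like the kubelet's periodic garbage collection of sandboxes (edf1fa79…, 03626eb2…) whose pods no longer exist: the network is torn down, the sandbox is removed, and the "Forcibly stopping" pass together with the "not found ... nil podSandboxStatus" warning only means the sandbox had already been deleted by the time the event was emitted. The same cleanup can be reproduced against the CRI socket; a sketch under the same assumptions as the earlier CRI example (socket path assumed, not from this log):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed containerd CRI socket, as in the earlier sketch.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.TODO()

        // List sandboxes that are no longer ready, then stop and remove each one,
        // mirroring the StopPodSandbox/RemovePodSandbox pairs above.
        notReady := runtimeapi.PodSandboxState_SANDBOX_NOTREADY
        list, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{
            Filter: &runtimeapi.PodSandboxFilter{
                State: &runtimeapi.PodSandboxStateValue{State: notReady},
            },
        })
        if err != nil {
            panic(err)
        }
        for _, sb := range list.Items {
            fmt.Println("removing sandbox", sb.Id)
            if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
                fmt.Println("stop:", err)
            }
            if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
                fmt.Println("remove:", err)
            }
        }
    }
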
Aug 13 00:21:53.413853 containerd[2021]: time="2025-08-13T00:21:53.413737229Z" level=info msg="RemovePodSandbox \"03626eb2474a8bbbe3c50f1f8b8a3d479b9e289e3170c22ff423cbd3da3c9590\" returns successfully" Aug 13 00:21:57.031098 kubelet[3219]: E0813 00:21:57.030550 3219 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-145?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
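
The two "Failed to update lease" errors are the kubelet's node heartbeat failing: it PUTs a Lease named after the node into the kube-node-lease namespace with a 10s client timeout (the ?timeout=10s in the URL), and both attempts against 172.31.19.145:6443 timed out, first with a context deadline and then with a client-side timeout. The current heartbeat state can be read back through the coordination API; a minimal client-go sketch, again with an assumed kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // The kubelet heartbeat object matches the URL in the failed PUT above:
        // namespace kube-node-lease, name equal to the node name.
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), "ip-172-31-19-145", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if lease.Spec.HolderIdentity != nil {
            fmt.Println("held by", *lease.Spec.HolderIdentity)
        }
        if lease.Spec.RenewTime != nil {
            fmt.Println("last renewed at", lease.Spec.RenewTime.Time)
        }
    }

If renewals keep failing past the node controller's grace period the node condition is marked Unknown; a couple of isolated timeouts like the ones above are usually absorbed without a condition change.
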