Sep 4 23:53:37.894364 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:03:18 -00 2025
Sep 4 23:53:37.894403 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:53:37.894423 kernel: BIOS-provided physical RAM map:
Sep 4 23:53:37.894436 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 23:53:37.894448 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 4 23:53:37.894460 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 4 23:53:37.894475 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 4 23:53:37.894489 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 4 23:53:37.894502 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 4 23:53:37.894981 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 4 23:53:37.895003 kernel: NX (Execute Disable) protection: active
Sep 4 23:53:37.895017 kernel: APIC: Static calls initialized
Sep 4 23:53:37.895028 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Sep 4 23:53:37.895041 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Sep 4 23:53:37.895057 kernel: extended physical RAM map:
Sep 4 23:53:37.895071 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 23:53:37.895089 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Sep 4 23:53:37.895101 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Sep 4 23:53:37.895115 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Sep 4 23:53:37.895128 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 4 23:53:37.895141 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 4 23:53:37.895155 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 4 23:53:37.895169 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 4 23:53:37.895183 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 4 23:53:37.895198 kernel: efi: EFI v2.7 by EDK II
Sep 4 23:53:37.895211 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 4 23:53:37.895228 kernel: secureboot: Secure boot disabled
Sep 4 23:53:37.895242 kernel: SMBIOS 2.7 present.
Sep 4 23:53:37.895268 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 4 23:53:37.895281 kernel: Hypervisor detected: KVM
Sep 4 23:53:37.895294 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 23:53:37.895307 kernel: kvm-clock: using sched offset of 4178902833 cycles
Sep 4 23:53:37.895321 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 23:53:37.895334 kernel: tsc: Detected 2499.998 MHz processor
Sep 4 23:53:37.895349 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 23:53:37.895363 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 23:53:37.895377 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 4 23:53:37.895395 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 23:53:37.895409 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 23:53:37.895424 kernel: Using GB pages for direct mapping
Sep 4 23:53:37.895443 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:53:37.895458 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 4 23:53:37.895474 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 4 23:53:37.895492 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 23:53:37.897060 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 4 23:53:37.897090 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 4 23:53:37.897106 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 4 23:53:37.897124 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 23:53:37.897137 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 23:53:37.897156 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 4 23:53:37.897170 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 4 23:53:37.897189 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 4 23:53:37.897203 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 4 23:53:37.897218 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 4 23:53:37.897232 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 4 23:53:37.897247 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 4 23:53:37.897269 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 4 23:53:37.897282 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 4 23:53:37.897295 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 4 23:53:37.897313 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 4 23:53:37.897327 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 4 23:53:37.897341 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 4 23:53:37.897356 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 4 23:53:37.897371 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 4 23:53:37.897385 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 4 23:53:37.897400 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 4 23:53:37.897416 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 4 23:53:37.897431 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 4 23:53:37.897446 kernel: NUMA: Initialized distance table, cnt=1
Sep 4 23:53:37.897465 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Sep 4 23:53:37.897480 kernel: Zone ranges:
Sep 4 23:53:37.897495 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 23:53:37.897523 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 4 23:53:37.897537 kernel: Normal empty
Sep 4 23:53:37.897549 kernel: Movable zone start for each node
Sep 4 23:53:37.897562 kernel: Early memory node ranges
Sep 4 23:53:37.897575 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 4 23:53:37.897590 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 4 23:53:37.897609 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 4 23:53:37.897624 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 4 23:53:37.897639 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 23:53:37.897654 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 4 23:53:37.897668 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 4 23:53:37.897681 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 4 23:53:37.897696 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 4 23:53:37.897711 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 23:53:37.897724 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 4 23:53:37.897743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 23:53:37.897758 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 23:53:37.897772 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 23:53:37.897787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 23:53:37.897801 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 23:53:37.897816 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 23:53:37.897830 kernel: TSC deadline timer available
Sep 4 23:53:37.897845 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 23:53:37.897859 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 23:53:37.897878 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 4 23:53:37.897894 kernel: Booting paravirtualized kernel on KVM
Sep 4 23:53:37.897910 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 23:53:37.897925 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 23:53:37.897941 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 4 23:53:37.897957 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 4 23:53:37.897973 kernel: pcpu-alloc: [0] 0 1
Sep 4 23:53:37.897989 kernel: kvm-guest: PV spinlocks enabled
Sep 4 23:53:37.898005 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 23:53:37.898028 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:53:37.898053 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:53:37.898067 kernel: random: crng init done
Sep 4 23:53:37.898100 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:53:37.898116 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 23:53:37.898133 kernel: Fallback order for Node 0: 0
Sep 4 23:53:37.898150 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Sep 4 23:53:37.898166 kernel: Policy zone: DMA32
Sep 4 23:53:37.898187 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:53:37.898204 kernel: Memory: 1872536K/2037804K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 165012K reserved, 0K cma-reserved)
Sep 4 23:53:37.898221 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 23:53:37.898237 kernel: Kernel/User page tables isolation: enabled
Sep 4 23:53:37.898255 kernel: ftrace: allocating 37943 entries in 149 pages
Sep 4 23:53:37.898284 kernel: ftrace: allocated 149 pages with 4 groups
Sep 4 23:53:37.898305 kernel: Dynamic Preempt: voluntary
Sep 4 23:53:37.898323 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:53:37.898342 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:53:37.898360 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 23:53:37.898379 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:53:37.898396 kernel: Rude variant of Tasks RCU enabled.
Sep 4 23:53:37.898418 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:53:37.898435 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:53:37.898452 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 23:53:37.898469 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 4 23:53:37.898487 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:53:37.899571 kernel: Console: colour dummy device 80x25
Sep 4 23:53:37.899598 kernel: printk: console [tty0] enabled
Sep 4 23:53:37.899615 kernel: printk: console [ttyS0] enabled
Sep 4 23:53:37.899632 kernel: ACPI: Core revision 20230628
Sep 4 23:53:37.899648 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 4 23:53:37.899664 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 23:53:37.899680 kernel: x2apic enabled
Sep 4 23:53:37.899695 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 23:53:37.899712 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 4 23:53:37.899734 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 4 23:53:37.899752 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 4 23:53:37.899770 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 4 23:53:37.899787 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 23:53:37.899804 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 23:53:37.899820 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 23:53:37.899836 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 4 23:53:37.899852 kernel: RETBleed: Vulnerable
Sep 4 23:53:37.899867 kernel: Speculative Store Bypass: Vulnerable
Sep 4 23:53:37.899882 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 23:53:37.899900 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 23:53:37.899915 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 4 23:53:37.899930 kernel: active return thunk: its_return_thunk
Sep 4 23:53:37.899945 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 4 23:53:37.899960 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 23:53:37.899976 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 23:53:37.899992 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 23:53:37.900007 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 4 23:53:37.900023 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 4 23:53:37.900038 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 4 23:53:37.900053 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 4 23:53:37.900072 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 4 23:53:37.900087 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 4 23:53:37.900103 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 23:53:37.900118 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 4 23:53:37.900133 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 4 23:53:37.900148 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 4 23:53:37.900164 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 4 23:53:37.900179 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 4 23:53:37.900195 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 4 23:53:37.900211 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 4 23:53:37.900226 kernel: Freeing SMP alternatives memory: 32K
Sep 4 23:53:37.900245 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:53:37.900260 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:53:37.900276 kernel: landlock: Up and running.
Sep 4 23:53:37.900291 kernel: SELinux: Initializing.
Sep 4 23:53:37.900307 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 23:53:37.900322 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 23:53:37.900339 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 4 23:53:37.900355 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:53:37.900371 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:53:37.900388 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:53:37.900404 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 4 23:53:37.900423 kernel: signal: max sigframe size: 3632
Sep 4 23:53:37.900439 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:53:37.900455 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:53:37.900472 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 23:53:37.900488 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:53:37.900504 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 23:53:37.901554 kernel: .... node #0, CPUs: #1
Sep 4 23:53:37.901582 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 4 23:53:37.901605 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 4 23:53:37.901621 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 23:53:37.901636 kernel: smpboot: Max logical packages: 1
Sep 4 23:53:37.901652 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 4 23:53:37.901668 kernel: devtmpfs: initialized
Sep 4 23:53:37.901683 kernel: x86/mm: Memory block size: 128MB
Sep 4 23:53:37.901699 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 4 23:53:37.901715 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:53:37.901731 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 23:53:37.901750 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:53:37.901765 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:53:37.901779 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:53:37.901795 kernel: audit: type=2000 audit(1757030018.236:1): state=initialized audit_enabled=0 res=1
Sep 4 23:53:37.901811 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:53:37.901827 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 23:53:37.901843 kernel: cpuidle: using governor menu
Sep 4 23:53:37.901859 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:53:37.901875 kernel: dca service started, version 1.12.1
Sep 4 23:53:37.901894 kernel: PCI: Using configuration type 1 for base access
Sep 4 23:53:37.901910 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 23:53:37.901926 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:53:37.901943 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:53:37.901959 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:53:37.901974 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:53:37.901990 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:53:37.902006 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:53:37.902022 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:53:37.902090 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 4 23:53:37.902107 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 23:53:37.902123 kernel: ACPI: Interpreter enabled
Sep 4 23:53:37.902140 kernel: ACPI: PM: (supports S0 S5)
Sep 4 23:53:37.902156 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 23:53:37.902173 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 23:53:37.902189 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 23:53:37.902206 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 4 23:53:37.902222 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 23:53:37.902444 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 23:53:37.903675 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 4 23:53:37.903841 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 4 23:53:37.903864 kernel: acpiphp: Slot [3] registered
Sep 4 23:53:37.903882 kernel: acpiphp: Slot [4] registered
Sep 4 23:53:37.903899 kernel: acpiphp: Slot [5] registered
Sep 4 23:53:37.903916 kernel: acpiphp: Slot [6] registered
Sep 4 23:53:37.903936 kernel: acpiphp: Slot [7] registered
Sep 4 23:53:37.903953 kernel: acpiphp: Slot [8] registered
Sep 4 23:53:37.903968 kernel: acpiphp: Slot [9] registered
Sep 4 23:53:37.903984 kernel: acpiphp: Slot [10] registered
Sep 4 23:53:37.904001 kernel: acpiphp: Slot [11] registered
Sep 4 23:53:37.904018 kernel: acpiphp: Slot [12] registered
Sep 4 23:53:37.904035 kernel: acpiphp: Slot [13] registered
Sep 4 23:53:37.904052 kernel: acpiphp: Slot [14] registered
Sep 4 23:53:37.904069 kernel: acpiphp: Slot [15] registered
Sep 4 23:53:37.904085 kernel: acpiphp: Slot [16] registered
Sep 4 23:53:37.904106 kernel: acpiphp: Slot [17] registered
Sep 4 23:53:37.904122 kernel: acpiphp: Slot [18] registered
Sep 4 23:53:37.904138 kernel: acpiphp: Slot [19] registered
Sep 4 23:53:37.904155 kernel: acpiphp: Slot [20] registered
Sep 4 23:53:37.904173 kernel: acpiphp: Slot [21] registered
Sep 4 23:53:37.904190 kernel: acpiphp: Slot [22] registered
Sep 4 23:53:37.904206 kernel: acpiphp: Slot [23] registered
Sep 4 23:53:37.904222 kernel: acpiphp: Slot [24] registered
Sep 4 23:53:37.904238 kernel: acpiphp: Slot [25] registered
Sep 4 23:53:37.904258 kernel: acpiphp: Slot [26] registered
Sep 4 23:53:37.904275 kernel: acpiphp: Slot [27] registered
Sep 4 23:53:37.904291 kernel: acpiphp: Slot [28] registered
Sep 4 23:53:37.904309 kernel: acpiphp: Slot [29] registered
Sep 4 23:53:37.904327 kernel: acpiphp: Slot [30] registered
Sep 4 23:53:37.904343 kernel: acpiphp: Slot [31] registered
Sep 4 23:53:37.904360 kernel: PCI host bridge to bus 0000:00
Sep 4 23:53:37.904537 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 23:53:37.904667 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 23:53:37.904796 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 23:53:37.904920 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 4 23:53:37.905042 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 4 23:53:37.905163 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 23:53:37.905318 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 23:53:37.905477 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 4 23:53:37.907733 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 4 23:53:37.907910 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 4 23:53:37.908075 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 4 23:53:37.908234 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 4 23:53:37.908397 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 4 23:53:37.912474 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 4 23:53:37.912656 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 4 23:53:37.912802 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 4 23:53:37.912959 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 4 23:53:37.913098 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Sep 4 23:53:37.913234 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 4 23:53:37.913371 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Sep 4 23:53:37.913563 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 23:53:37.913731 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 23:53:37.913874 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Sep 4 23:53:37.914016 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 23:53:37.914182 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Sep 4 23:53:37.914203 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 23:53:37.914221 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 23:53:37.914238 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 23:53:37.914255 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 23:53:37.914276 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 23:53:37.914294 kernel: iommu: Default domain type: Translated
Sep 4 23:53:37.914311 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 23:53:37.914328 kernel: efivars: Registered efivars operations
Sep 4 23:53:37.914344 kernel: PCI: Using ACPI for IRQ routing
Sep 4 23:53:37.914362 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 23:53:37.914379 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Sep 4 23:53:37.914395 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 4 23:53:37.914411 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 4 23:53:37.919297 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 4 23:53:37.919469 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 4 23:53:37.919626 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 23:53:37.919648 kernel: vgaarb: loaded
Sep 4 23:53:37.919666 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 4 23:53:37.919682 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 4 23:53:37.919699 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 23:53:37.919715 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:53:37.919732 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:53:37.919753 kernel: pnp: PnP ACPI init
Sep 4 23:53:37.919769 kernel: pnp: PnP ACPI: found 5 devices
Sep 4 23:53:37.919786 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 23:53:37.919802 kernel: NET: Registered PF_INET protocol family
Sep 4 23:53:37.919819 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:53:37.919835 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 4 23:53:37.919851 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:53:37.919868 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 23:53:37.919887 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 4 23:53:37.919904 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 4 23:53:37.919920 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 23:53:37.919936 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 23:53:37.919952 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:53:37.919968 kernel: NET: Registered PF_XDP protocol family
Sep 4 23:53:37.920098 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 23:53:37.920221 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 23:53:37.920342 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 23:53:37.920466 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 4 23:53:37.920601 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 4 23:53:37.920743 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 23:53:37.920765 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:53:37.920781 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 4 23:53:37.920796 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 4 23:53:37.920812 kernel: clocksource: Switched to clocksource tsc
Sep 4 23:53:37.920828 kernel: Initialise system trusted keyrings
Sep 4 23:53:37.920848 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 4 23:53:37.920864 kernel: Key type asymmetric registered
Sep 4 23:53:37.920880 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:53:37.920896 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 23:53:37.920913 kernel: io scheduler mq-deadline registered
Sep 4 23:53:37.920929 kernel: io scheduler kyber registered
Sep 4 23:53:37.920945 kernel: io scheduler bfq registered
Sep 4 23:53:37.920962 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 23:53:37.920977 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:53:37.920992 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 23:53:37.921010 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 23:53:37.921027 kernel: i8042: Warning: Keylock active
Sep 4 23:53:37.921043 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 23:53:37.921060 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 23:53:37.921222 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 4 23:53:37.921363 kernel: rtc_cmos 00:00: registered as rtc0
Sep 4 23:53:37.922198 kernel: rtc_cmos 00:00: setting system clock to 2025-09-04T23:53:37 UTC (1757030017)
Sep 4 23:53:37.922382 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 4 23:53:37.922406 kernel: intel_pstate: CPU model not supported
Sep 4 23:53:37.922423 kernel: efifb: probing for efifb
Sep 4 23:53:37.922442 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Sep 4 23:53:37.922483 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 4 23:53:37.922504 kernel: efifb: scrolling: redraw
Sep 4 23:53:37.922537 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 23:53:37.922554 kernel: Console: switching to colour frame buffer device 100x37
Sep 4 23:53:37.922580 kernel: fb0: EFI VGA frame buffer device
Sep 4 23:53:37.922619 kernel: pstore: Using crash dump compression: deflate
Sep 4 23:53:37.922647 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 23:53:37.922666 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:53:37.922684 kernel: Segment Routing with IPv6
Sep 4 23:53:37.922701 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:53:37.922718 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:53:37.922733 kernel: Key type dns_resolver registered
Sep 4 23:53:37.922749 kernel: IPI shorthand broadcast: enabled
Sep 4 23:53:37.922766 kernel: sched_clock: Marking stable (454001769, 170532305)->(706773958, -82239884)
Sep 4 23:53:37.922787 kernel: registered taskstats version 1
Sep 4 23:53:37.922802 kernel: Loading compiled-in X.509 certificates
Sep 4 23:53:37.922818 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: f395d469db1520f53594f6c4948c5f8002e6cc8b'
Sep 4 23:53:37.922834 kernel: Key type .fscrypt registered
Sep 4 23:53:37.922853 kernel: Key type fscrypt-provisioning registered
Sep 4 23:53:37.922875 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 23:53:37.922890 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:53:37.922906 kernel: ima: No architecture policies found
Sep 4 23:53:37.922923 kernel: clk: Disabling unused clocks
Sep 4 23:53:37.922943 kernel: Freeing unused kernel image (initmem) memory: 43508K
Sep 4 23:53:37.922959 kernel: Write protecting the kernel read-only data: 38912k
Sep 4 23:53:37.922974 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K
Sep 4 23:53:37.922991 kernel: Run /init as init process
Sep 4 23:53:37.923007 kernel: with arguments:
Sep 4 23:53:37.923024 kernel: /init
Sep 4 23:53:37.923041 kernel: with environment:
Sep 4 23:53:37.923058 kernel: HOME=/
Sep 4 23:53:37.923075 kernel: TERM=linux
Sep 4 23:53:37.923097 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:53:37.923117 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:53:37.923141 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:53:37.923160 systemd[1]: Detected virtualization amazon.
Sep 4 23:53:37.923179 systemd[1]: Detected architecture x86-64.
Sep 4 23:53:37.923201 systemd[1]: Running in initrd.
Sep 4 23:53:37.923219 systemd[1]: No hostname configured, using default hostname.
Sep 4 23:53:37.923238 systemd[1]: Hostname set to .
Sep 4 23:53:37.923257 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:53:37.923287 systemd[1]: Queued start job for default target initrd.target.
Sep 4 23:53:37.923306 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:53:37.923325 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:53:37.923347 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 23:53:37.923367 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:53:37.923386 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 23:53:37.923406 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 23:53:37.923426 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 23:53:37.923443 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 23:53:37.923460 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:53:37.923481 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:53:37.923499 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:53:37.926063 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:53:37.926086 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:53:37.926120 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:53:37.926139 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:53:37.926158 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:53:37.926176 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 23:53:37.926200 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 23:53:37.926217 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:53:37.926236 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:53:37.926254 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:53:37.926273 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:53:37.926291 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 23:53:37.926308 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:53:37.926327 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 23:53:37.926345 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 23:53:37.926367 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:53:37.926385 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:53:37.926403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:53:37.926422 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 23:53:37.926478 systemd-journald[179]: Collecting audit messages is disabled.
Sep 4 23:53:37.926550 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:53:37.926574 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 23:53:37.926594 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:53:37.926616 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:53:37.926636 systemd-journald[179]: Journal started Sep 4 23:53:37.926674 systemd-journald[179]: Runtime Journal (/run/log/journal/ec264fec8455d9b5cd29498b13f48d8f) is 4.7M, max 38.2M, 33.4M free. Sep 4 23:53:37.922904 systemd-modules-load[180]: Inserted module 'overlay' Sep 4 23:53:37.932007 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 23:53:37.932092 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 23:53:37.943682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:53:37.948680 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:53:37.953688 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 23:53:37.976560 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 23:53:37.980619 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:53:37.988722 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 23:53:37.990381 kernel: Bridge firewalling registered Sep 4 23:53:37.989142 systemd-modules-load[180]: Inserted module 'br_netfilter' Sep 4 23:53:37.990946 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 23:53:37.992760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:53:37.993600 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:53:38.005597 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 4 23:53:38.006143 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 4 23:53:38.007715 dracut-cmdline[208]: dracut-dracut-053 Sep 4 23:53:38.011894 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e Sep 4 23:53:38.018360 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:53:38.027705 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 23:53:38.083437 systemd-resolved[233]: Positive Trust Anchors: Sep 4 23:53:38.084491 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:53:38.084578 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:53:38.092753 systemd-resolved[233]: Defaulting to hostname 'linux'. Sep 4 23:53:38.094116 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 23:53:38.095113 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:53:38.108529 kernel: SCSI subsystem initialized Sep 4 23:53:38.118537 kernel: Loading iSCSI transport class v2.0-870. 
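The dracut-cmdline hook above echoes the kernel command line it parsed. As a minimal sketch of how such a /proc/cmdline-style string decomposes into key/value parameters (the helper name is my own; the sample values are taken from the log):

```python
# Minimal sketch: split a /proc/cmdline-style string into a dict.
# Flag-only parameters (no '=') map to None; repeated keys (e.g. the
# two console= entries in the log) keep the last value seen.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

args = parse_cmdline(
    "rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
    "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
    "flatcar.first_boot=detected flatcar.oem.id=ec2 net.ifnames=0"
)
print(args["root"])            # LABEL=ROOT
print(args["flatcar.oem.id"])  # ec2
```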
Sep 4 23:53:38.129618 kernel: iscsi: registered transport (tcp)
Sep 4 23:53:38.150796 kernel: iscsi: registered transport (qla4xxx)
Sep 4 23:53:38.150879 kernel: QLogic iSCSI HBA Driver
Sep 4 23:53:38.188260 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:53:38.194655 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 23:53:38.219687 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 23:53:38.219766 kernel: device-mapper: uevent: version 1.0.3
Sep 4 23:53:38.219790 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 23:53:38.261538 kernel: raid6: avx512x4 gen() 17843 MB/s
Sep 4 23:53:38.279531 kernel: raid6: avx512x2 gen() 17768 MB/s
Sep 4 23:53:38.297533 kernel: raid6: avx512x1 gen() 17745 MB/s
Sep 4 23:53:38.315528 kernel: raid6: avx2x4 gen() 17654 MB/s
Sep 4 23:53:38.333528 kernel: raid6: avx2x2 gen() 17610 MB/s
Sep 4 23:53:38.351677 kernel: raid6: avx2x1 gen() 13507 MB/s
Sep 4 23:53:38.351714 kernel: raid6: using algorithm avx512x4 gen() 17843 MB/s
Sep 4 23:53:38.371324 kernel: raid6: .... xor() 7314 MB/s, rmw enabled
Sep 4 23:53:38.371363 kernel: raid6: using avx512x2 recovery algorithm
Sep 4 23:53:38.392544 kernel: xor: automatically using best checksumming function avx
Sep 4 23:53:38.544544 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 23:53:38.554758 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:53:38.560713 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:53:38.576133 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Sep 4 23:53:38.581828 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:53:38.591690 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 23:53:38.609742 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Sep 4 23:53:38.639091 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:53:38.644717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:53:38.696959 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:53:38.704285 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 23:53:38.734893 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:53:38.737185 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:53:38.739158 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:53:38.739669 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:53:38.745723 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 23:53:38.780838 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:53:38.794722 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 23:53:38.794989 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 23:53:38.821089 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 23:53:38.821376 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 4 23:53:38.821605 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 4 23:53:38.826532 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 23:53:38.830527 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:64:37:f1:04:29
Sep 4 23:53:38.833959 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:53:38.834897 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:53:38.836861 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:53:38.839900 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 23:53:38.839092 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:53:38.839358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:53:38.840385 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:53:38.853586 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 23:53:38.853619 kernel: GPT:9289727 != 16777215
Sep 4 23:53:38.853639 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 23:53:38.853658 kernel: GPT:9289727 != 16777215
Sep 4 23:53:38.853676 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 23:53:38.853695 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 23:53:38.846754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:53:38.852257 (udev-worker)[452]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:53:38.871972 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 23:53:38.872021 kernel: AES CTR mode by8 optimization enabled
Sep 4 23:53:38.872251 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:53:38.872406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:53:38.887475 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:53:38.891026 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:53:38.909282 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:53:38.917697 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:53:38.940051 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
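The GPT warnings above are what the kernel prints when the backup (alternate) GPT header is not on the disk's last LBA, which is typical after an EBS volume has been grown: the backup header still sits at the old end of the disk (LBA 9289727) rather than the new last LBA (16777215). A small sketch of that check, using the sector counts from the log (the function name is my own):

```python
# The backup GPT header must live on the disk's last LBA.
# Values below come from the "GPT:9289727 != 16777215" lines in the log.
def backup_header_ok(alt_header_lba: int, total_sectors: int) -> bool:
    """True if the alternate GPT header sits on the disk's last LBA (0-based)."""
    return alt_header_lba == total_sectors - 1

print(backup_header_ok(9289727, 16777216))   # old backup header -> warning
print(backup_header_ok(16777215, 16777216))  # where it belongs after repair
```

Tools such as GNU Parted (which the kernel itself suggests) or `sgdisk -e` are typically used to relocate the backup GPT structures to the new end of the disk; on Flatcar the disk-uuid/extend-filesystems machinery handles this during boot.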
Sep 4 23:53:38.979535 kernel: BTRFS: device fsid 185ffa67-4184-4488-b7c8-7c0711a63b2d devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (460)
Sep 4 23:53:38.995551 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by (udev-worker) (455)
Sep 4 23:53:39.029292 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 23:53:39.041175 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 23:53:39.041690 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 23:53:39.052429 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 23:53:39.077041 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 23:53:39.087660 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 23:53:39.093576 disk-uuid[635]: Primary Header is updated.
Sep 4 23:53:39.093576 disk-uuid[635]: Secondary Entries is updated.
Sep 4 23:53:39.093576 disk-uuid[635]: Secondary Header is updated.
Sep 4 23:53:39.099534 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 23:53:39.109545 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 23:53:40.117256 disk-uuid[636]: The operation has completed successfully.
Sep 4 23:53:40.119141 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 23:53:40.216392 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 23:53:40.216484 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 23:53:40.265681 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 23:53:40.268968 sh[894]: Success
Sep 4 23:53:40.282535 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 4 23:53:40.386258 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 23:53:40.394612 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 23:53:40.395647 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 23:53:40.426747 kernel: BTRFS info (device dm-0): first mount of filesystem 185ffa67-4184-4488-b7c8-7c0711a63b2d
Sep 4 23:53:40.426801 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:53:40.429722 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 23:53:40.429760 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 23:53:40.430946 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 23:53:40.504531 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 23:53:40.518878 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 23:53:40.519882 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 23:53:40.525686 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 23:53:40.527395 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 23:53:40.553859 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:53:40.553919 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:53:40.556015 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 23:53:40.574565 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 23:53:40.579565 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:53:40.585288 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 23:53:40.590706 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 23:53:40.614257 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:53:40.623709 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:53:40.645583 systemd-networkd[1083]: lo: Link UP
Sep 4 23:53:40.645773 systemd-networkd[1083]: lo: Gained carrier
Sep 4 23:53:40.647142 systemd-networkd[1083]: Enumeration completed
Sep 4 23:53:40.647442 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:53:40.647446 systemd-networkd[1083]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:53:40.647701 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:53:40.648198 systemd[1]: Reached target network.target - Network.
Sep 4 23:53:40.649288 systemd-networkd[1083]: eth0: Link UP
Sep 4 23:53:40.649292 systemd-networkd[1083]: eth0: Gained carrier
Sep 4 23:53:40.649300 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:53:40.662619 systemd-networkd[1083]: eth0: DHCPv4 address 172.31.21.112/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 23:53:40.872776 ignition[1046]: Ignition 2.20.0
Sep 4 23:53:40.872787 ignition[1046]: Stage: fetch-offline
Sep 4 23:53:40.872949 ignition[1046]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:53:40.872957 ignition[1046]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:53:40.874566 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:53:40.873173 ignition[1046]: Ignition finished successfully
Sep 4 23:53:40.877692 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 23:53:40.891449 ignition[1094]: Ignition 2.20.0
Sep 4 23:53:40.891460 ignition[1094]: Stage: fetch
Sep 4 23:53:40.891761 ignition[1094]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:53:40.891770 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:53:40.891853 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:53:40.899024 ignition[1094]: PUT result: OK
Sep 4 23:53:40.900462 ignition[1094]: parsed url from cmdline: ""
Sep 4 23:53:40.900472 ignition[1094]: no config URL provided
Sep 4 23:53:40.900479 ignition[1094]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:53:40.900501 ignition[1094]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:53:40.900530 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:53:40.901029 ignition[1094]: PUT result: OK
Sep 4 23:53:40.901057 ignition[1094]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 23:53:40.901575 ignition[1094]: GET result: OK
Sep 4 23:53:40.901649 ignition[1094]: parsing config with SHA512: 883c16dcfbca6a117ffec734b8ec876eefded3dbd6c6c999e4c677a6a57a77b3dfd8b9d3ae1757ac1429f5e654fee3fd4c6992bd45c168684d7ac75d08184aa3
Sep 4 23:53:40.905886 unknown[1094]: fetched base config from "system"
Sep 4 23:53:40.906440 unknown[1094]: fetched base config from "system"
Sep 4 23:53:40.906453 unknown[1094]: fetched user config from "aws"
Sep 4 23:53:40.906880 ignition[1094]: fetch: fetch complete
Sep 4 23:53:40.906884 ignition[1094]: fetch: fetch passed
Sep 4 23:53:40.906929 ignition[1094]: Ignition finished successfully
Sep 4 23:53:40.908804 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 23:53:40.912671 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
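The PUT-then-GET sequence in the fetch stage above is the IMDSv2 session flow: Ignition first PUTs to `/latest/api/token` to obtain a session token, then presents that token on the user-data GET. A sketch of how such requests are constructed (shown without sending them, since IMDS is only reachable from inside an instance; the header names are the documented IMDSv2 ones, while the TTL value and token string are arbitrary placeholders):

```python
import urllib.request

# Step 1: a session-token request (IMDSv2). The TTL header is required;
# 300 seconds is an arbitrary choice for illustration.
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)

# Step 2: present the token on the user-data GET, mirroring the log's
# "GET http://169.254.169.254/2019-10-01/user-data". A real client would
# first urlopen(token_req) and use the response body as the token.
def user_data_request(token: str) -> urllib.request.Request:
    return urllib.request.Request(
        "http://169.254.169.254/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )

req = user_data_request("EXAMPLE-TOKEN")
print(token_req.get_method())  # PUT
print(req.get_method())        # GET
```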
Sep 4 23:53:40.927933 ignition[1101]: Ignition 2.20.0
Sep 4 23:53:40.927944 ignition[1101]: Stage: kargs
Sep 4 23:53:40.928232 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:53:40.928241 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:53:40.928314 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:53:40.929056 ignition[1101]: PUT result: OK
Sep 4 23:53:40.931205 ignition[1101]: kargs: kargs passed
Sep 4 23:53:40.931258 ignition[1101]: Ignition finished successfully
Sep 4 23:53:40.932493 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 23:53:40.938688 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 23:53:40.950436 ignition[1108]: Ignition 2.20.0
Sep 4 23:53:40.950447 ignition[1108]: Stage: disks
Sep 4 23:53:40.950746 ignition[1108]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:53:40.950755 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:53:40.950840 ignition[1108]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:53:40.951580 ignition[1108]: PUT result: OK
Sep 4 23:53:40.953733 ignition[1108]: disks: disks passed
Sep 4 23:53:40.953790 ignition[1108]: Ignition finished successfully
Sep 4 23:53:40.955341 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 23:53:40.955858 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 23:53:40.956157 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 23:53:40.956647 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:53:40.957117 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:53:40.957614 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:53:40.969678 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 23:53:40.997459 systemd-fsck[1116]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 23:53:40.999904 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 23:53:41.004629 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 23:53:41.096690 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 86dd2c20-900e-43ec-8fda-e9f0f484a013 r/w with ordered data mode. Quota mode: none.
Sep 4 23:53:41.097410 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 23:53:41.098343 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:53:41.115636 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:53:41.117627 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 23:53:41.118800 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 23:53:41.119212 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 23:53:41.119238 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:53:41.128345 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 23:53:41.129984 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 23:53:41.137615 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1135)
Sep 4 23:53:41.141539 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:53:41.141572 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:53:41.141585 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 23:53:41.154533 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 23:53:41.156248 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:53:41.388039 initrd-setup-root[1159]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 23:53:41.392327 initrd-setup-root[1166]: cut: /sysroot/etc/group: No such file or directory
Sep 4 23:53:41.396963 initrd-setup-root[1173]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 23:53:41.400964 initrd-setup-root[1180]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 23:53:41.633135 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 23:53:41.638633 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 23:53:41.642451 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 23:53:41.647865 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 23:53:41.648758 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:53:41.668143 ignition[1248]: INFO : Ignition 2.20.0
Sep 4 23:53:41.668944 ignition[1248]: INFO : Stage: mount
Sep 4 23:53:41.670028 ignition[1248]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:53:41.670028 ignition[1248]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:53:41.670028 ignition[1248]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:53:41.671774 ignition[1248]: INFO : PUT result: OK
Sep 4 23:53:41.673468 ignition[1248]: INFO : mount: mount passed
Sep 4 23:53:41.674470 ignition[1248]: INFO : Ignition finished successfully
Sep 4 23:53:41.675554 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 23:53:41.687613 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 23:53:41.689537 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 23:53:41.704699 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:53:41.724534 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1260)
Sep 4 23:53:41.724588 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:53:41.728068 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:53:41.728131 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 23:53:41.734533 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 23:53:41.736192 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:53:41.753727 ignition[1277]: INFO : Ignition 2.20.0
Sep 4 23:53:41.753727 ignition[1277]: INFO : Stage: files
Sep 4 23:53:41.754758 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:53:41.754758 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:53:41.754758 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:53:41.755691 ignition[1277]: INFO : PUT result: OK
Sep 4 23:53:41.757400 ignition[1277]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 23:53:41.757973 ignition[1277]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 23:53:41.757973 ignition[1277]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 23:53:41.794568 ignition[1277]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 23:53:41.795310 ignition[1277]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 23:53:41.795310 ignition[1277]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 23:53:41.794969 unknown[1277]: wrote ssh authorized keys file for user: core
Sep 4 23:53:41.808627 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 23:53:41.809365 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 4 23:53:41.952418 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 23:53:42.185742 systemd-networkd[1083]: eth0: Gained IPv6LL
Sep 4 23:53:42.266328 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 23:53:42.266328 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:53:42.267893 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 4 23:53:42.389274 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 23:53:42.611741 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:53:42.611741 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:53:42.613585 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 4 23:53:42.838584 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 23:53:43.095339 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:53:43.095339 ignition[1277]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 23:53:43.109359 ignition[1277]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:53:43.110289 ignition[1277]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:53:43.110289 ignition[1277]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 23:53:43.110289 ignition[1277]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 23:53:43.110289 ignition[1277]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 23:53:43.110289 ignition[1277]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:53:43.110289 ignition[1277]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:53:43.110289 ignition[1277]: INFO : files: files passed
Sep 4 23:53:43.110289 ignition[1277]: INFO : Ignition finished successfully
Sep 4 23:53:43.111240 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 23:53:43.118735 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 23:53:43.120579 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 23:53:43.123272 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 23:53:43.123808 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 23:53:43.137059 initrd-setup-root-after-ignition[1305]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:53:43.137059 initrd-setup-root-after-ignition[1305]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:53:43.138788 initrd-setup-root-after-ignition[1309]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:53:43.140629 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:53:43.141191 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 23:53:43.145622 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 23:53:43.173601 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 23:53:43.173718 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 23:53:43.174788 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 23:53:43.175723 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:53:43.176419 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:53:43.177665 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:53:43.192975 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:53:43.199665 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:53:43.209344 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:53:43.209911 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:53:43.210665 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:53:43.211343 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:53:43.211450 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 23:53:43.212332 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 23:53:43.213057 systemd[1]: Stopped target basic.target - Basic System. Sep 4 23:53:43.213694 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 23:53:43.214334 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 23:53:43.214968 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 23:53:43.215615 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 23:53:43.216241 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 23:53:43.216894 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 23:53:43.217815 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 23:53:43.218496 systemd[1]: Stopped target swap.target - Swaps. Sep 4 23:53:43.219125 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 23:53:43.219232 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 23:53:43.220147 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:53:43.220806 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:53:43.221376 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 23:53:43.222101 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:53:43.222547 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 23:53:43.222656 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 23:53:43.223847 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 23:53:43.223956 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Sep 4 23:53:43.224494 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 23:53:43.224601 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 23:53:43.230680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 23:53:43.234687 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 23:53:43.235109 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 23:53:43.235233 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:53:43.236167 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 23:53:43.236284 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 23:53:43.243601 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 23:53:43.243695 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 23:53:43.245940 ignition[1329]: INFO : Ignition 2.20.0 Sep 4 23:53:43.245940 ignition[1329]: INFO : Stage: umount Sep 4 23:53:43.247088 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:53:43.248691 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 23:53:43.249067 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 23:53:43.250940 ignition[1329]: INFO : PUT result: OK Sep 4 23:53:43.251710 ignition[1329]: INFO : umount: umount passed Sep 4 23:53:43.252048 ignition[1329]: INFO : Ignition finished successfully Sep 4 23:53:43.252737 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 23:53:43.252823 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 23:53:43.254079 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 23:53:43.254162 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 23:53:43.254576 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Sep 4 23:53:43.254619 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 23:53:43.254941 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 23:53:43.254984 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 23:53:43.255332 systemd[1]: Stopped target network.target - Network. Sep 4 23:53:43.257563 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 23:53:43.257611 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:53:43.257986 systemd[1]: Stopped target paths.target - Path Units. Sep 4 23:53:43.258249 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 23:53:43.261560 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:53:43.261826 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 23:53:43.262118 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 23:53:43.262389 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 23:53:43.262428 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 23:53:43.262735 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 23:53:43.262766 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 23:53:43.263888 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 23:53:43.263932 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 23:53:43.264612 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 23:53:43.264651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 23:53:43.265051 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 23:53:43.265363 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 23:53:43.266887 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 4 23:53:43.271412 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 23:53:43.271505 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 23:53:43.274433 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 4 23:53:43.274823 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 23:53:43.274911 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 23:53:43.276461 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 4 23:53:43.277066 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 23:53:43.277133 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:53:43.281616 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 23:53:43.281940 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 23:53:43.281991 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 23:53:43.282413 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 23:53:43.282451 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:53:43.284581 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 23:53:43.284620 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 23:53:43.285060 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 23:53:43.285097 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:53:43.285956 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:53:43.287992 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 4 23:53:43.288050 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Sep 4 23:53:43.298375 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 23:53:43.298489 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 23:53:43.300861 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 23:53:43.300981 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:53:43.302041 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 23:53:43.302105 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 23:53:43.302752 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 23:53:43.302782 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:53:43.303351 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 23:53:43.303393 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 23:53:43.304347 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 23:53:43.304390 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 23:53:43.305332 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 23:53:43.305378 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:53:43.313634 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 23:53:43.314297 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 23:53:43.314345 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:53:43.314913 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 23:53:43.314953 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 23:53:43.315304 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 4 23:53:43.315340 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:53:43.315675 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:53:43.315710 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:53:43.317020 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 4 23:53:43.317073 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 23:53:43.321876 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 23:53:43.321957 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 23:53:43.382287 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 23:53:43.382400 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 23:53:43.383414 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 23:53:43.383914 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 23:53:43.383967 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 23:53:43.390634 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 23:53:43.398392 systemd[1]: Switching root. Sep 4 23:53:43.435774 systemd-journald[179]: Journal stopped Sep 4 23:53:45.061230 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). 
Sep 4 23:53:45.061331 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 23:53:45.061355 kernel: SELinux: policy capability open_perms=1 Sep 4 23:53:45.061374 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 23:53:45.061393 kernel: SELinux: policy capability always_check_network=0 Sep 4 23:53:45.061418 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 23:53:45.061437 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 23:53:45.061461 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 23:53:45.061485 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 23:53:45.063525 kernel: audit: type=1403 audit(1757030023.791:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 23:53:45.063566 systemd[1]: Successfully loaded SELinux policy in 80.664ms. Sep 4 23:53:45.063596 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.105ms. Sep 4 23:53:45.063618 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 23:53:45.063637 systemd[1]: Detected virtualization amazon. Sep 4 23:53:45.063657 systemd[1]: Detected architecture x86-64. Sep 4 23:53:45.063676 systemd[1]: Detected first boot. Sep 4 23:53:45.063695 systemd[1]: Initializing machine ID from VM UUID. Sep 4 23:53:45.063714 zram_generator::config[1375]: No configuration found. Sep 4 23:53:45.063738 kernel: Guest personality initialized and is inactive Sep 4 23:53:45.063756 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 4 23:53:45.063773 kernel: Initialized host personality Sep 4 23:53:45.063790 kernel: NET: Registered PF_VSOCK protocol family Sep 4 23:53:45.063808 systemd[1]: Populated /etc with preset unit settings. 
Sep 4 23:53:45.063829 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 4 23:53:45.063849 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 23:53:45.063870 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 23:53:45.063894 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 23:53:45.063915 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 23:53:45.063937 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 23:53:45.063957 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 23:53:45.063977 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 23:53:45.063996 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 23:53:45.064015 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 23:53:45.064035 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 23:53:45.064059 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 23:53:45.064079 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:53:45.064101 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:53:45.064122 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 23:53:45.064146 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 23:53:45.064164 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 23:53:45.064182 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 4 23:53:45.064199 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 23:53:45.064224 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:53:45.064245 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 23:53:45.064265 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 23:53:45.064285 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 23:53:45.064304 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 23:53:45.064325 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:53:45.064346 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 23:53:45.064367 systemd[1]: Reached target slices.target - Slice Units. Sep 4 23:53:45.064387 systemd[1]: Reached target swap.target - Swaps. Sep 4 23:53:45.064412 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 23:53:45.064432 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 23:53:45.064452 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 23:53:45.064474 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:53:45.064495 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 23:53:45.065932 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:53:45.065966 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 23:53:45.065989 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 23:53:45.066020 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 23:53:45.066047 systemd[1]: Mounting media.mount - External Media Directory... 
Sep 4 23:53:45.066068 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:53:45.066095 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 23:53:45.066116 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 23:53:45.066137 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 23:53:45.066159 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 23:53:45.066180 systemd[1]: Reached target machines.target - Containers. Sep 4 23:53:45.066201 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 23:53:45.066222 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:53:45.066247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 23:53:45.066268 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 23:53:45.066289 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:53:45.066310 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 23:53:45.066331 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:53:45.066351 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 23:53:45.066373 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:53:45.066393 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 23:53:45.066418 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Sep 4 23:53:45.066439 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 23:53:45.066460 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 23:53:45.066482 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 23:53:45.066501 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:53:45.069551 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 23:53:45.069578 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 23:53:45.069600 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 23:53:45.069626 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 23:53:45.069647 kernel: fuse: init (API version 7.39) Sep 4 23:53:45.069668 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 4 23:53:45.069689 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 23:53:45.069711 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 23:53:45.069732 systemd[1]: Stopped verity-setup.service. Sep 4 23:53:45.069757 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:53:45.069779 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 23:53:45.069803 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 23:53:45.069824 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 23:53:45.069845 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Sep 4 23:53:45.069869 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 23:53:45.069890 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 23:53:45.069911 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:53:45.069933 kernel: ACPI: bus type drm_connector registered Sep 4 23:53:45.069954 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 23:53:45.069976 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 23:53:45.069997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:53:45.070027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:53:45.070053 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 23:53:45.070074 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 23:53:45.070094 kernel: loop: module loaded Sep 4 23:53:45.070113 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:53:45.070132 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:53:45.070151 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 23:53:45.070171 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 23:53:45.070189 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:53:45.070207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:53:45.070230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 23:53:45.070250 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 23:53:45.070268 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 23:53:45.070324 systemd-journald[1454]: Collecting audit messages is disabled. Sep 4 23:53:45.070359 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Sep 4 23:53:45.070379 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 23:53:45.070401 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 23:53:45.070421 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 4 23:53:45.070443 systemd-journald[1454]: Journal started Sep 4 23:53:45.070480 systemd-journald[1454]: Runtime Journal (/run/log/journal/ec264fec8455d9b5cd29498b13f48d8f) is 4.7M, max 38.2M, 33.4M free. Sep 4 23:53:44.719832 systemd[1]: Queued start job for default target multi-user.target. Sep 4 23:53:44.727737 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 4 23:53:44.728190 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 23:53:45.079534 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 23:53:45.090563 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 23:53:45.094535 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:53:45.105536 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 23:53:45.109531 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 23:53:45.121590 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 23:53:45.125542 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 23:53:45.139535 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:53:45.149538 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Sep 4 23:53:45.155999 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 23:53:45.161639 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 23:53:45.165592 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 23:53:45.173468 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 23:53:45.175668 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 23:53:45.179659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:53:45.183134 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 23:53:45.184127 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 23:53:45.185334 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 23:53:45.187159 kernel: loop0: detected capacity change from 0 to 138176 Sep 4 23:53:45.188349 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 23:53:45.189753 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:53:45.222072 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 23:53:45.224637 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 23:53:45.234047 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 23:53:45.244176 systemd-journald[1454]: Time spent on flushing to /var/log/journal/ec264fec8455d9b5cd29498b13f48d8f is 87.577ms for 1021 entries. Sep 4 23:53:45.244176 systemd-journald[1454]: System Journal (/var/log/journal/ec264fec8455d9b5cd29498b13f48d8f) is 8M, max 195.6M, 187.6M free. Sep 4 23:53:45.353948 systemd-journald[1454]: Received client request to flush runtime journal. 
Sep 4 23:53:45.354008 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 23:53:45.354047 kernel: loop1: detected capacity change from 0 to 224512 Sep 4 23:53:45.249763 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 23:53:45.252555 systemd-tmpfiles[1492]: ACLs are not supported, ignoring. Sep 4 23:53:45.252576 systemd-tmpfiles[1492]: ACLs are not supported, ignoring. Sep 4 23:53:45.253352 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 23:53:45.270610 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 23:53:45.282744 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 23:53:45.320281 udevadm[1522]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 23:53:45.357168 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 23:53:45.369186 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 23:53:45.379449 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 23:53:45.391110 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:53:45.416679 systemd-tmpfiles[1532]: ACLs are not supported, ignoring. Sep 4 23:53:45.421426 systemd-tmpfiles[1532]: ACLs are not supported, ignoring. Sep 4 23:53:45.428377 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 4 23:53:45.471542 kernel: loop2: detected capacity change from 0 to 147912 Sep 4 23:53:45.594555 kernel: loop3: detected capacity change from 0 to 62832 Sep 4 23:53:45.678542 kernel: loop4: detected capacity change from 0 to 138176 Sep 4 23:53:45.719554 kernel: loop5: detected capacity change from 0 to 224512 Sep 4 23:53:45.734078 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 23:53:45.754689 kernel: loop6: detected capacity change from 0 to 147912 Sep 4 23:53:45.780539 kernel: loop7: detected capacity change from 0 to 62832 Sep 4 23:53:45.801297 (sd-merge)[1538]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 4 23:53:45.802215 (sd-merge)[1538]: Merged extensions into '/usr'. Sep 4 23:53:45.808210 systemd[1]: Reload requested from client PID 1491 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 23:53:45.808227 systemd[1]: Reloading... Sep 4 23:53:45.886538 zram_generator::config[1562]: No configuration found. Sep 4 23:53:46.032067 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:53:46.150927 systemd[1]: Reloading finished in 341 ms. Sep 4 23:53:46.172883 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 23:53:46.174142 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 23:53:46.190858 systemd[1]: Starting ensure-sysext.service... Sep 4 23:53:46.194713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 23:53:46.199713 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:53:46.222839 systemd[1]: Reload requested from client PID 1618 ('systemctl') (unit ensure-sysext.service)... Sep 4 23:53:46.222859 systemd[1]: Reloading... 
Sep 4 23:53:46.263631 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 23:53:46.264024 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 23:53:46.266636 systemd-udevd[1620]: Using default interface naming scheme 'v255'.
Sep 4 23:53:46.268401 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 23:53:46.268867 systemd-tmpfiles[1619]: ACLs are not supported, ignoring.
Sep 4 23:53:46.269454 systemd-tmpfiles[1619]: ACLs are not supported, ignoring.
Sep 4 23:53:46.276438 systemd-tmpfiles[1619]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:53:46.276453 systemd-tmpfiles[1619]: Skipping /boot
Sep 4 23:53:46.306189 systemd-tmpfiles[1619]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:53:46.306207 systemd-tmpfiles[1619]: Skipping /boot
Sep 4 23:53:46.369041 zram_generator::config[1648]: No configuration found.
Sep 4 23:53:46.488015 (udev-worker)[1659]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:53:46.608144 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1669)
Sep 4 23:53:46.686771 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:53:46.698577 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 4 23:53:46.698938 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 4 23:53:46.743532 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Sep 4 23:53:46.776539 kernel: ACPI: button: Power Button [PWRF]
Sep 4 23:53:46.780375 ldconfig[1484]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 23:53:46.801530 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Sep 4 23:53:46.837540 kernel: ACPI: button: Sleep Button [SLPF]
Sep 4 23:53:46.888282 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 23:53:46.888558 systemd[1]: Reloading finished in 665 ms.
Sep 4 23:53:46.904074 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:53:46.906116 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 23:53:46.908131 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:53:46.947702 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 23:53:46.964115 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 23:53:46.970825 systemd[1]: Finished ensure-sysext.service.
Sep 4 23:53:46.999625 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 23:53:47.001266 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:53:47.005743 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:53:47.009692 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 23:53:47.012904 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:53:47.015684 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 23:53:47.018686 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:53:47.021688 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:53:47.025156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:53:47.027814 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:53:47.029751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:53:47.037841 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 23:53:47.040592 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:53:47.042619 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 23:53:47.044281 lvm[1818]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:53:47.052713 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:53:47.061670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:53:47.063466 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 23:53:47.075727 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 23:53:47.080429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:53:47.082596 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:53:47.083670 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 23:53:47.087933 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:53:47.088171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:53:47.092453 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:53:47.111730 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 23:53:47.116905 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:53:47.118613 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:53:47.130538 lvm[1841]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:53:47.131771 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 23:53:47.141099 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:53:47.141355 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:53:47.142415 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:53:47.151431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:53:47.151935 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:53:47.153679 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 23:53:47.158719 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:53:47.171996 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 23:53:47.176822 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 23:53:47.226290 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:53:47.228815 augenrules[1861]: No rules
Sep 4 23:53:47.227633 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:53:47.228419 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 23:53:47.236764 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 23:53:47.245706 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 23:53:47.263863 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 23:53:47.266565 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 23:53:47.270719 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:53:47.279060 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:53:47.351706 systemd-resolved[1831]: Positive Trust Anchors:
Sep 4 23:53:47.351726 systemd-resolved[1831]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:53:47.351777 systemd-resolved[1831]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:53:47.354804 systemd-networkd[1828]: lo: Link UP
Sep 4 23:53:47.354813 systemd-networkd[1828]: lo: Gained carrier
Sep 4 23:53:47.356901 systemd-networkd[1828]: Enumeration completed
Sep 4 23:53:47.357015 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:53:47.358157 systemd-networkd[1828]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:53:47.358167 systemd-networkd[1828]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:53:47.358846 systemd-resolved[1831]: Defaulting to hostname 'linux'.
Sep 4 23:53:47.360168 systemd-networkd[1828]: eth0: Link UP
Sep 4 23:53:47.362472 systemd-networkd[1828]: eth0: Gained carrier
Sep 4 23:53:47.362498 systemd-networkd[1828]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:53:47.365046 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 23:53:47.368804 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 23:53:47.369455 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:53:47.370639 systemd[1]: Reached target network.target - Network.
Sep 4 23:53:47.371064 systemd-networkd[1828]: eth0: DHCPv4 address 172.31.21.112/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 23:53:47.371144 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:53:47.371671 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:53:47.373688 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 23:53:47.374235 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 23:53:47.374949 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 23:53:47.375616 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 23:53:47.376119 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 23:53:47.376562 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 23:53:47.376604 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:53:47.377077 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:53:47.379022 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 23:53:47.382137 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 23:53:47.386885 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 23:53:47.387613 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 23:53:47.388135 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 23:53:47.399428 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 23:53:47.400680 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 23:53:47.402420 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 23:53:47.403057 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 23:53:47.404017 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:53:47.404544 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:53:47.405069 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:53:47.405104 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:53:47.408613 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 23:53:47.412691 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 23:53:47.415687 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 23:53:47.418593 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 23:53:47.422681 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 23:53:47.423232 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 23:53:47.426703 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 23:53:47.429777 systemd[1]: Started ntpd.service - Network Time Service.
Sep 4 23:53:47.452768 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 23:53:47.461470 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 4 23:53:47.479982 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 23:53:47.483713 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 23:53:47.495759 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 23:53:47.497853 extend-filesystems[1890]: Found loop4
Sep 4 23:53:47.498793 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 23:53:47.499627 extend-filesystems[1890]: Found loop5
Sep 4 23:53:47.499627 extend-filesystems[1890]: Found loop6
Sep 4 23:53:47.499627 extend-filesystems[1890]: Found loop7
Sep 4 23:53:47.499627 extend-filesystems[1890]: Found nvme0n1
Sep 4 23:53:47.499627 extend-filesystems[1890]: Found nvme0n1p1
Sep 4 23:53:47.499627 extend-filesystems[1890]: Found nvme0n1p2
Sep 4 23:53:47.499627 extend-filesystems[1890]: Found nvme0n1p3
Sep 4 23:53:47.499627 extend-filesystems[1890]: Found usr
Sep 4 23:53:47.499627 extend-filesystems[1890]: Found nvme0n1p4
Sep 4 23:53:47.499627 extend-filesystems[1890]: Found nvme0n1p6
Sep 4 23:53:47.499627 extend-filesystems[1890]: Found nvme0n1p7
Sep 4 23:53:47.499627 extend-filesystems[1890]: Found nvme0n1p9
Sep 4 23:53:47.499627 extend-filesystems[1890]: Checking size of /dev/nvme0n1p9
Sep 4 23:53:47.499860 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 23:53:47.526445 jq[1889]: false
Sep 4 23:53:47.506710 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 23:53:47.526437 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 23:53:47.532939 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 23:53:47.533207 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 23:53:47.533619 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 23:53:47.533872 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 23:53:47.544921 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 23:53:47.545158 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 23:53:47.571814 dbus-daemon[1888]: [system] SELinux support is enabled
Sep 4 23:53:47.572014 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 23:53:47.576710 dbus-daemon[1888]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1828 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 4 23:53:47.580765 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 23:53:47.580820 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 23:53:47.584991 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 23:53:47.585025 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 23:53:47.593631 dbus-daemon[1888]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 23:53:47.599687 extend-filesystems[1890]: Resized partition /dev/nvme0n1p9
Sep 4 23:53:47.607784 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 4 23:53:47.609014 jq[1909]: true
Sep 4 23:53:47.621031 update_engine[1902]: I20250904 23:53:47.616902  1902 main.cc:92] Flatcar Update Engine starting
Sep 4 23:53:47.629849 (ntainerd)[1925]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 23:53:47.634039 ntpd[1892]: ntpd 4.2.8p17@1.4004-o Thu Sep  4 21:32:00 UTC 2025 (1): Starting
Sep 4 23:53:47.649720 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 4 23:53:47.649775 extend-filesystems[1934]: resize2fs 1.47.1 (20-May-2024)
Sep 4 23:53:47.659088 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: ntpd 4.2.8p17@1.4004-o Thu Sep  4 21:32:00 UTC 2025 (1): Starting
Sep 4 23:53:47.659088 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 23:53:47.659088 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: ----------------------------------------------------
Sep 4 23:53:47.659088 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: ntp-4 is maintained by Network Time Foundation,
Sep 4 23:53:47.659088 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 23:53:47.659088 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: corporation. Support and training for ntp-4 are
Sep 4 23:53:47.659088 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: available at https://www.nwtime.org/support
Sep 4 23:53:47.659088 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: ----------------------------------------------------
Sep 4 23:53:47.659088 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: proto: precision = 0.072 usec (-24)
Sep 4 23:53:47.659088 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: basedate set to 2025-08-23
Sep 4 23:53:47.659088 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: gps base set to 2025-08-24 (week 2381)
Sep 4 23:53:47.643680 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 23:53:47.661995 update_engine[1902]: I20250904 23:53:47.640529  1902 update_check_scheduler.cc:74] Next update check in 10m20s
Sep 4 23:53:47.634066 ntpd[1892]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 23:53:47.662279 tar[1912]: linux-amd64/LICENSE
Sep 4 23:53:47.662279 tar[1912]: linux-amd64/helm
Sep 4 23:53:47.657827 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 23:53:47.634077 ntpd[1892]: ----------------------------------------------------
Sep 4 23:53:47.634087 ntpd[1892]: ntp-4 is maintained by Network Time Foundation,
Sep 4 23:53:47.634096 ntpd[1892]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 23:53:47.634106 ntpd[1892]: corporation. Support and training for ntp-4 are
Sep 4 23:53:47.634116 ntpd[1892]: available at https://www.nwtime.org/support
Sep 4 23:53:47.634125 ntpd[1892]: ----------------------------------------------------
Sep 4 23:53:47.644639 ntpd[1892]: proto: precision = 0.072 usec (-24)
Sep 4 23:53:47.646776 ntpd[1892]: basedate set to 2025-08-23
Sep 4 23:53:47.646798 ntpd[1892]: gps base set to 2025-08-24 (week 2381)
Sep 4 23:53:47.671722 ntpd[1892]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 23:53:47.673258 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 23:53:47.673258 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 23:53:47.673258 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 23:53:47.673258 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: Listen normally on 3 eth0 172.31.21.112:123
Sep 4 23:53:47.673258 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: Listen normally on 4 lo [::1]:123
Sep 4 23:53:47.673258 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: bind(21) AF_INET6 fe80::464:37ff:fef1:429%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 23:53:47.673258 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: unable to create socket on eth0 (5) for fe80::464:37ff:fef1:429%2#123
Sep 4 23:53:47.673258 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: failed to init interface for address fe80::464:37ff:fef1:429%2
Sep 4 23:53:47.673258 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: Listening on routing socket on fd #21 for interface updates
Sep 4 23:53:47.671775 ntpd[1892]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 23:53:47.671974 ntpd[1892]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 23:53:47.672014 ntpd[1892]: Listen normally on 3 eth0 172.31.21.112:123
Sep 4 23:53:47.672056 ntpd[1892]: Listen normally on 4 lo [::1]:123
Sep 4 23:53:47.672105 ntpd[1892]: bind(21) AF_INET6 fe80::464:37ff:fef1:429%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 23:53:47.672128 ntpd[1892]: unable to create socket on eth0 (5) for fe80::464:37ff:fef1:429%2#123
Sep 4 23:53:47.672145 ntpd[1892]: failed to init interface for address fe80::464:37ff:fef1:429%2
Sep 4 23:53:47.672182 ntpd[1892]: Listening on routing socket on fd #21 for interface updates
Sep 4 23:53:47.677734 jq[1932]: true
Sep 4 23:53:47.685575 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 23:53:47.685575 ntpd[1892]: 4 Sep 23:53:47 ntpd[1892]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 23:53:47.683251 ntpd[1892]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 23:53:47.683282 ntpd[1892]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 23:53:47.692112 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 4 23:53:47.747416 coreos-metadata[1887]: Sep 04 23:53:47.747 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 23:53:47.771581 coreos-metadata[1887]: Sep 04 23:53:47.771 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 4 23:53:47.775597 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 4 23:53:47.778338 coreos-metadata[1887]: Sep 04 23:53:47.778 INFO Fetch successful
Sep 4 23:53:47.778571 coreos-metadata[1887]: Sep 04 23:53:47.778 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 4 23:53:47.789673 coreos-metadata[1887]: Sep 04 23:53:47.789 INFO Fetch successful
Sep 4 23:53:47.795695 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1669)
Sep 4 23:53:47.795748 coreos-metadata[1887]: Sep 04 23:53:47.789 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 4 23:53:47.795748 coreos-metadata[1887]: Sep 04 23:53:47.792 INFO Fetch successful
Sep 4 23:53:47.795748 coreos-metadata[1887]: Sep 04 23:53:47.792 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 4 23:53:47.796705 coreos-metadata[1887]: Sep 04 23:53:47.796 INFO Fetch successful
Sep 4 23:53:47.796705 coreos-metadata[1887]: Sep 04 23:53:47.796 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 4 23:53:47.799240 extend-filesystems[1934]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 4 23:53:47.799240 extend-filesystems[1934]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 23:53:47.799240 extend-filesystems[1934]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 4 23:53:47.802155 extend-filesystems[1890]: Resized filesystem in /dev/nvme0n1p9
Sep 4 23:53:47.802491 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 23:53:47.802876 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 23:53:47.805803 coreos-metadata[1887]: Sep 04 23:53:47.805 INFO Fetch failed with 404: resource not found
Sep 4 23:53:47.805803 coreos-metadata[1887]: Sep 04 23:53:47.805 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 4 23:53:47.811711 coreos-metadata[1887]: Sep 04 23:53:47.811 INFO Fetch successful
Sep 4 23:53:47.811830 coreos-metadata[1887]: Sep 04 23:53:47.811 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 4 23:53:47.813344 coreos-metadata[1887]: Sep 04 23:53:47.813 INFO Fetch successful
Sep 4 23:53:47.813423 coreos-metadata[1887]: Sep 04 23:53:47.813 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 4 23:53:47.814024 coreos-metadata[1887]: Sep 04 23:53:47.813 INFO Fetch successful
Sep 4 23:53:47.814121 coreos-metadata[1887]: Sep 04 23:53:47.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 4 23:53:47.815627 coreos-metadata[1887]: Sep 04 23:53:47.815 INFO Fetch successful
Sep 4 23:53:47.815930 coreos-metadata[1887]: Sep 04 23:53:47.815 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 4 23:53:47.818741 coreos-metadata[1887]: Sep 04 23:53:47.818 INFO Fetch successful
Sep 4 23:53:47.902185 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 23:53:47.903203 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 23:53:47.911568 bash[1989]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 23:53:47.913212 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 23:53:47.926756 systemd[1]: Starting sshkeys.service...
Sep 4 23:53:47.980755 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 23:53:47.989190 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 23:53:48.048375 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 4 23:53:48.053484 dbus-daemon[1888]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 4 23:53:48.056853 dbus-daemon[1888]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1928 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 4 23:53:48.070617 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 4 23:53:48.081014 coreos-metadata[2018]: Sep 04 23:53:48.080 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 23:53:48.082833 coreos-metadata[2018]: Sep 04 23:53:48.082 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 4 23:53:48.083444 coreos-metadata[2018]: Sep 04 23:53:48.083 INFO Fetch successful
Sep 4 23:53:48.083444 coreos-metadata[2018]: Sep 04 23:53:48.083 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 4 23:53:48.086155 coreos-metadata[2018]: Sep 04 23:53:48.085 INFO Fetch successful
Sep 4 23:53:48.087353 unknown[2018]: wrote ssh authorized keys file for user: core
Sep 4 23:53:48.092323 systemd-logind[1900]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 4 23:53:48.092358 systemd-logind[1900]: Watching system buttons on /dev/input/event3 (Sleep Button)
Sep 4 23:53:48.092381 systemd-logind[1900]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 23:53:48.102729 systemd-logind[1900]: New seat seat0.
Sep 4 23:53:48.104413 polkitd[2040]: Started polkitd version 121
Sep 4 23:53:48.110607 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 23:53:48.138036 locksmithd[1937]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 23:53:48.174592 update-ssh-keys[2049]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 23:53:48.177602 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 23:53:48.183974 systemd[1]: Finished sshkeys.service.
Sep 4 23:53:48.220089 polkitd[2040]: Loading rules from directory /etc/polkit-1/rules.d
Sep 4 23:53:48.220183 polkitd[2040]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 4 23:53:48.227568 polkitd[2040]: Finished loading, compiling and executing 2 rules
Sep 4 23:53:48.244559 dbus-daemon[1888]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 4 23:53:48.245617 systemd[1]: Started polkit.service - Authorization Manager.
Sep 4 23:53:48.246936 polkitd[2040]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 4 23:53:48.299982 systemd-hostnamed[1928]: Hostname set to (transient)
Sep 4 23:53:48.300484 systemd-resolved[1831]: System hostname changed to 'ip-172-31-21-112'.
Sep 4 23:53:48.403725 sshd_keygen[1938]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 23:53:48.437973 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 23:53:48.450648 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 23:53:48.464244 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 23:53:48.464700 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 23:53:48.477629 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 23:53:48.495091 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 23:53:48.508644 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 23:53:48.517938 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 23:53:48.518835 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 23:53:48.550589 containerd[1925]: time="2025-09-04T23:53:48.549091634Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 4 23:53:48.589800 containerd[1925]: time="2025-09-04T23:53:48.589747872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:53:48.591906 containerd[1925]: time="2025-09-04T23:53:48.591866494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:53:48.592015 containerd[1925]: time="2025-09-04T23:53:48.591998575Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 23:53:48.592102 containerd[1925]: time="2025-09-04T23:53:48.592089264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 23:53:48.592322 containerd[1925]: time="2025-09-04T23:53:48.592305895Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 23:53:48.592404 containerd[1925]: time="2025-09-04T23:53:48.592388926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 23:53:48.592556 containerd[1925]: time="2025-09-04T23:53:48.592537577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:53:48.592620 containerd[1925]: time="2025-09-04T23:53:48.592607605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:53:48.592961 containerd[1925]: time="2025-09-04T23:53:48.592939642Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:53:48.593035 containerd[1925]: time="2025-09-04T23:53:48.593021833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 23:53:48.593095 containerd[1925]: time="2025-09-04T23:53:48.593082497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:53:48.593151 containerd[1925]: time="2025-09-04T23:53:48.593137266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 23:53:48.593299 containerd[1925]: time="2025-09-04T23:53:48.593285338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:53:48.593647 containerd[1925]: time="2025-09-04T23:53:48.593629490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:53:48.593953 containerd[1925]: time="2025-09-04T23:53:48.593931449Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:53:48.594035 containerd[1925]: time="2025-09-04T23:53:48.594022084Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 23:53:48.594184 containerd[1925]: time="2025-09-04T23:53:48.594167182Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 23:53:48.594313 containerd[1925]: time="2025-09-04T23:53:48.594298172Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 23:53:48.598409 containerd[1925]: time="2025-09-04T23:53:48.598387930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 23:53:48.598568 containerd[1925]: time="2025-09-04T23:53:48.598549368Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 23:53:48.598688 containerd[1925]: time="2025-09-04T23:53:48.598673059Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 23:53:48.598759 containerd[1925]: time="2025-09-04T23:53:48.598747694Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 23:53:48.598835 containerd[1925]: time="2025-09-04T23:53:48.598820904Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 23:53:48.599026 containerd[1925]: time="2025-09-04T23:53:48.599011530Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 23:53:48.599496 containerd[1925]: time="2025-09-04T23:53:48.599479158Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 23:53:48.599700 containerd[1925]: time="2025-09-04T23:53:48.599685017Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 23:53:48.599785 containerd[1925]: time="2025-09-04T23:53:48.599770892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.599844600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.599866985Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.599888282Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.599906117Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.599926008Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.599945613Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.599962332Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.599978244Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.599992766Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.600016517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.600036513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.600055931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.600074944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600384 containerd[1925]: time="2025-09-04T23:53:48.600099333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600939 containerd[1925]: time="2025-09-04T23:53:48.600118598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600939 containerd[1925]: time="2025-09-04T23:53:48.600136956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600939 containerd[1925]: time="2025-09-04T23:53:48.600156224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600939 containerd[1925]: time="2025-09-04T23:53:48.600174669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600939 containerd[1925]: time="2025-09-04T23:53:48.600194950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600939 containerd[1925]: time="2025-09-04T23:53:48.600217376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600939 containerd[1925]: time="2025-09-04T23:53:48.600235557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600939 containerd[1925]: time="2025-09-04T23:53:48.600252772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600939 containerd[1925]: time="2025-09-04T23:53:48.600271886Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 23:53:48.600939 containerd[1925]: time="2025-09-04T23:53:48.600297371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600939 containerd[1925]: time="2025-09-04T23:53:48.600314094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.600939 containerd[1925]: time="2025-09-04T23:53:48.600332626Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 23:53:48.602615 containerd[1925]: time="2025-09-04T23:53:48.601360716Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 23:53:48.602615 containerd[1925]: time="2025-09-04T23:53:48.601468725Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 23:53:48.602615 containerd[1925]: time="2025-09-04T23:53:48.601486698Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 23:53:48.602615 containerd[1925]: time="2025-09-04T23:53:48.601503121Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 23:53:48.602615 containerd[1925]: time="2025-09-04T23:53:48.601541559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.602615 containerd[1925]: time="2025-09-04T23:53:48.601560539Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 23:53:48.602615 containerd[1925]: time="2025-09-04T23:53:48.601574875Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 23:53:48.602615 containerd[1925]: time="2025-09-04T23:53:48.601592128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 23:53:48.602938 containerd[1925]: time="2025-09-04T23:53:48.601966710Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 23:53:48.602938 containerd[1925]: time="2025-09-04T23:53:48.602049425Z" level=info msg="Connect containerd service"
Sep 4 23:53:48.602938 containerd[1925]: time="2025-09-04T23:53:48.602092209Z" level=info msg="using legacy CRI server"
Sep 4 23:53:48.602938 containerd[1925]: time="2025-09-04T23:53:48.602101668Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 23:53:48.602938 containerd[1925]: time="2025-09-04T23:53:48.602252204Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 23:53:48.603719 containerd[1925]: time="2025-09-04T23:53:48.603693076Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 23:53:48.603929 containerd[1925]: time="2025-09-04T23:53:48.603901927Z" level=info msg="Start subscribing containerd event"
Sep 4 23:53:48.604026 containerd[1925]: time="2025-09-04T23:53:48.604011744Z" level=info msg="Start recovering state"
Sep 4 23:53:48.604149 containerd[1925]: time="2025-09-04T23:53:48.604135948Z" level=info msg="Start event monitor"
Sep 4 23:53:48.604207 containerd[1925]: time="2025-09-04T23:53:48.604196556Z" level=info msg="Start snapshots syncer"
Sep 4 23:53:48.604258 containerd[1925]: time="2025-09-04T23:53:48.604249173Z" level=info msg="Start cni network conf syncer for default"
Sep 4 23:53:48.604316 containerd[1925]: time="2025-09-04T23:53:48.604306129Z" level=info msg="Start streaming server"
Sep 4 23:53:48.604838 containerd[1925]: time="2025-09-04T23:53:48.604820557Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 23:53:48.604984 containerd[1925]: time="2025-09-04T23:53:48.604944279Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 23:53:48.605154 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 23:53:48.607058 containerd[1925]: time="2025-09-04T23:53:48.606661622Z" level=info msg="containerd successfully booted in 0.058715s"
Sep 4 23:53:48.634527 ntpd[1892]: bind(24) AF_INET6 fe80::464:37ff:fef1:429%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 23:53:48.634901 ntpd[1892]: 4 Sep 23:53:48 ntpd[1892]: bind(24) AF_INET6 fe80::464:37ff:fef1:429%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 23:53:48.634901 ntpd[1892]: 4 Sep 23:53:48 ntpd[1892]: unable to create socket on eth0 (6) for fe80::464:37ff:fef1:429%2#123
Sep 4 23:53:48.634901 ntpd[1892]: 4 Sep 23:53:48 ntpd[1892]: failed to init interface for address fe80::464:37ff:fef1:429%2
Sep 4 23:53:48.634573 ntpd[1892]: unable to create socket on eth0 (6) for fe80::464:37ff:fef1:429%2#123
Sep 4 23:53:48.634590 ntpd[1892]: failed to init interface for address fe80::464:37ff:fef1:429%2
Sep 4 23:53:48.766292 tar[1912]: linux-amd64/README.md
Sep 4 23:53:48.776439 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 23:53:48.841871 systemd-networkd[1828]: eth0: Gained IPv6LL
Sep 4 23:53:48.844856 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 23:53:48.845993 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 23:53:48.853767 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 4 23:53:48.856757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:53:48.865896 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 23:53:48.892144 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 23:53:48.934996 amazon-ssm-agent[2112]: Initializing new seelog logger
Sep 4 23:53:48.935553 amazon-ssm-agent[2112]: New Seelog Logger Creation Complete
Sep 4 23:53:48.935553 amazon-ssm-agent[2112]: 2025/09/04 23:53:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:53:48.935553 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:53:48.935917 amazon-ssm-agent[2112]: 2025/09/04 23:53:48 processing appconfig overrides
Sep 4 23:53:48.936255 amazon-ssm-agent[2112]: 2025/09/04 23:53:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:53:48.936255 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:53:48.936343 amazon-ssm-agent[2112]: 2025/09/04 23:53:48 processing appconfig overrides
Sep 4 23:53:48.936618 amazon-ssm-agent[2112]: 2025/09/04 23:53:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:53:48.936618 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:53:48.936694 amazon-ssm-agent[2112]: 2025/09/04 23:53:48 processing appconfig overrides
Sep 4 23:53:48.937064 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO Proxy environment variables:
Sep 4 23:53:48.941555 amazon-ssm-agent[2112]: 2025/09/04 23:53:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:53:48.941555 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:53:48.941641 amazon-ssm-agent[2112]: 2025/09/04 23:53:48 processing appconfig overrides
Sep 4 23:53:49.036762 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO no_proxy:
Sep 4 23:53:49.135179 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO https_proxy:
Sep 4 23:53:49.153379 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO http_proxy:
Sep 4 23:53:49.153379 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO Checking if agent identity type OnPrem can be assumed
Sep 4 23:53:49.153379 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO Checking if agent identity type EC2 can be assumed
Sep 4 23:53:49.153379 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO Agent will take identity from EC2
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO [amazon-ssm-agent] Starting Core Agent
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO [Registrar] Starting registrar module
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:48 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:49 INFO [EC2Identity] EC2 registration was successful.
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:49 INFO [CredentialRefresher] credentialRefresher has started
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:49 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 4 23:53:49.153589 amazon-ssm-agent[2112]: 2025-09-04 23:53:49 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 4 23:53:49.232363 amazon-ssm-agent[2112]: 2025-09-04 23:53:49 INFO [CredentialRefresher] Next credential rotation will be in 31.85832797755 minutes
Sep 4 23:53:50.163631 amazon-ssm-agent[2112]: 2025-09-04 23:53:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 4 23:53:50.264894 amazon-ssm-agent[2112]: 2025-09-04 23:53:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2133) started
Sep 4 23:53:50.365608 amazon-ssm-agent[2112]: 2025-09-04 23:53:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 4 23:53:51.161082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:53:51.161945 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 23:53:51.163617 systemd[1]: Startup finished in 580ms (kernel) + 6.060s (initrd) + 7.451s (userspace) = 14.092s.
Sep 4 23:53:51.166811 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:53:51.634488 ntpd[1892]: Listen normally on 7 eth0 [fe80::464:37ff:fef1:429%2]:123
Sep 4 23:53:51.634850 ntpd[1892]: 4 Sep 23:53:51 ntpd[1892]: Listen normally on 7 eth0 [fe80::464:37ff:fef1:429%2]:123
Sep 4 23:53:51.725798 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 23:53:51.732800 systemd[1]: Started sshd@0-172.31.21.112:22-139.178.68.195:46576.service - OpenSSH per-connection server daemon (139.178.68.195:46576).
Sep 4 23:53:51.942878 sshd[2159]: Accepted publickey for core from 139.178.68.195 port 46576 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:53:51.944872 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:51.950997 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 23:53:51.957815 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 23:53:51.970822 systemd-logind[1900]: New session 1 of user core.
Sep 4 23:53:51.976495 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 23:53:51.982776 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 23:53:51.987034 (systemd)[2163]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 23:53:51.989860 systemd-logind[1900]: New session c1 of user core.
Sep 4 23:53:52.165025 systemd[2163]: Queued start job for default target default.target.
Sep 4 23:53:52.169635 systemd[2163]: Created slice app.slice - User Application Slice.
Sep 4 23:53:52.169676 systemd[2163]: Reached target paths.target - Paths.
Sep 4 23:53:52.169733 systemd[2163]: Reached target timers.target - Timers.
Sep 4 23:53:52.171950 systemd[2163]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 23:53:52.184929 systemd[2163]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 23:53:52.185072 systemd[2163]: Reached target sockets.target - Sockets.
Sep 4 23:53:52.185243 systemd[2163]: Reached target basic.target - Basic System.
Sep 4 23:53:52.185307 systemd[2163]: Reached target default.target - Main User Target.
Sep 4 23:53:52.185344 systemd[2163]: Startup finished in 187ms.
Sep 4 23:53:52.185451 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 23:53:52.189719 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 23:53:52.335851 systemd[1]: Started sshd@1-172.31.21.112:22-139.178.68.195:46586.service - OpenSSH per-connection server daemon (139.178.68.195:46586).
Sep 4 23:53:52.359101 kubelet[2149]: E0904 23:53:52.359015 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:53:52.361182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:53:52.361332 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:53:52.361732 systemd[1]: kubelet.service: Consumed 960ms CPU time, 265M memory peak.
Sep 4 23:53:52.494770 sshd[2175]: Accepted publickey for core from 139.178.68.195 port 46586 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:53:52.496175 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:52.500840 systemd-logind[1900]: New session 2 of user core.
Sep 4 23:53:52.511726 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 23:53:52.627727 sshd[2178]: Connection closed by 139.178.68.195 port 46586
Sep 4 23:53:52.628494 sshd-session[2175]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:52.631795 systemd[1]: sshd@1-172.31.21.112:22-139.178.68.195:46586.service: Deactivated successfully.
Sep 4 23:53:52.633543 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 23:53:52.634387 systemd-logind[1900]: Session 2 logged out. Waiting for processes to exit.
Sep 4 23:53:52.635430 systemd-logind[1900]: Removed session 2.
Sep 4 23:53:52.661773 systemd[1]: Started sshd@2-172.31.21.112:22-139.178.68.195:46596.service - OpenSSH per-connection server daemon (139.178.68.195:46596).
Sep 4 23:53:52.824565 sshd[2184]: Accepted publickey for core from 139.178.68.195 port 46596 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:53:52.825939 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:52.830602 systemd-logind[1900]: New session 3 of user core.
Sep 4 23:53:52.833677 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 23:53:52.953689 sshd[2186]: Connection closed by 139.178.68.195 port 46596
Sep 4 23:53:52.954546 sshd-session[2184]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:52.957574 systemd[1]: sshd@2-172.31.21.112:22-139.178.68.195:46596.service: Deactivated successfully.
Sep 4 23:53:52.959981 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 23:53:52.960914 systemd-logind[1900]: Session 3 logged out. Waiting for processes to exit.
Sep 4 23:53:52.961699 systemd-logind[1900]: Removed session 3.
Sep 4 23:53:52.989108 systemd[1]: Started sshd@3-172.31.21.112:22-139.178.68.195:46598.service - OpenSSH per-connection server daemon (139.178.68.195:46598).
Sep 4 23:53:53.146847 sshd[2192]: Accepted publickey for core from 139.178.68.195 port 46598 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:53:53.148122 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:53.153242 systemd-logind[1900]: New session 4 of user core.
Sep 4 23:53:53.158697 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 23:53:53.278023 sshd[2194]: Connection closed by 139.178.68.195 port 46598
Sep 4 23:53:53.278818 sshd-session[2192]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:53.281415 systemd[1]: sshd@3-172.31.21.112:22-139.178.68.195:46598.service: Deactivated successfully.
Sep 4 23:53:53.283088 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 23:53:53.284175 systemd-logind[1900]: Session 4 logged out. Waiting for processes to exit.
Sep 4 23:53:53.285145 systemd-logind[1900]: Removed session 4.
Sep 4 23:53:53.314767 systemd[1]: Started sshd@4-172.31.21.112:22-139.178.68.195:46600.service - OpenSSH per-connection server daemon (139.178.68.195:46600).
Sep 4 23:53:53.475187 sshd[2200]: Accepted publickey for core from 139.178.68.195 port 46600 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:53:53.476401 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:53.480402 systemd-logind[1900]: New session 5 of user core.
Sep 4 23:53:53.490731 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 23:53:53.633016 sudo[2203]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 23:53:53.633291 sudo[2203]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:53:53.653048 sudo[2203]: pam_unix(sudo:session): session closed for user root
Sep 4 23:53:53.675829 sshd[2202]: Connection closed by 139.178.68.195 port 46600
Sep 4 23:53:53.676784 sshd-session[2200]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:53.680294 systemd[1]: sshd@4-172.31.21.112:22-139.178.68.195:46600.service: Deactivated successfully.
Sep 4 23:53:53.682408 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 23:53:53.684007 systemd-logind[1900]: Session 5 logged out. Waiting for processes to exit.
Sep 4 23:53:53.685083 systemd-logind[1900]: Removed session 5.
Sep 4 23:53:53.715784 systemd[1]: Started sshd@5-172.31.21.112:22-139.178.68.195:46612.service - OpenSSH per-connection server daemon (139.178.68.195:46612).
Sep 4 23:53:53.881197 sshd[2209]: Accepted publickey for core from 139.178.68.195 port 46612 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:53:53.882686 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:53.888096 systemd-logind[1900]: New session 6 of user core.
Sep 4 23:53:53.894724 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 23:53:53.994348 sudo[2213]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 23:53:53.994815 sudo[2213]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:53:53.998685 sudo[2213]: pam_unix(sudo:session): session closed for user root
Sep 4 23:53:54.004272 sudo[2212]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 23:53:54.004765 sudo[2212]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:53:54.019905 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:53:54.051832 augenrules[2235]: No rules
Sep 4 23:53:54.052648 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:53:54.052903 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:53:54.053849 sudo[2212]: pam_unix(sudo:session): session closed for user root
Sep 4 23:53:54.076902 sshd[2211]: Connection closed by 139.178.68.195 port 46612
Sep 4 23:53:54.077433 sshd-session[2209]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:54.080153 systemd[1]: sshd@5-172.31.21.112:22-139.178.68.195:46612.service: Deactivated successfully.
Sep 4 23:53:54.081805 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 23:53:54.083003 systemd-logind[1900]: Session 6 logged out. Waiting for processes to exit.
Sep 4 23:53:54.083867 systemd-logind[1900]: Removed session 6.
Sep 4 23:53:54.111787 systemd[1]: Started sshd@6-172.31.21.112:22-139.178.68.195:46624.service - OpenSSH per-connection server daemon (139.178.68.195:46624).
Sep 4 23:53:54.267645 sshd[2244]: Accepted publickey for core from 139.178.68.195 port 46624 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:53:54.269243 sshd-session[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:54.273425 systemd-logind[1900]: New session 7 of user core.
Sep 4 23:53:54.279653 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 23:53:54.376693 sudo[2247]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 23:53:54.376963 sudo[2247]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:53:56.001233 systemd-resolved[1831]: Clock change detected. Flushing caches.
Sep 4 23:53:56.284159 (dockerd)[2264]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 23:53:56.284516 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 23:53:56.878495 dockerd[2264]: time="2025-09-04T23:53:56.878437861Z" level=info msg="Starting up"
Sep 4 23:53:57.113220 dockerd[2264]: time="2025-09-04T23:53:57.113177543Z" level=info msg="Loading containers: start."
Sep 4 23:53:57.329809 kernel: Initializing XFRM netlink socket
Sep 4 23:53:57.369578 (udev-worker)[2288]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:53:57.439298 systemd-networkd[1828]: docker0: Link UP
Sep 4 23:53:57.473234 dockerd[2264]: time="2025-09-04T23:53:57.473184992Z" level=info msg="Loading containers: done."
Sep 4 23:53:57.493399 dockerd[2264]: time="2025-09-04T23:53:57.493341175Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 23:53:57.493606 dockerd[2264]: time="2025-09-04T23:53:57.493460712Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 4 23:53:57.493668 dockerd[2264]: time="2025-09-04T23:53:57.493610540Z" level=info msg="Daemon has completed initialization"
Sep 4 23:53:57.529681 dockerd[2264]: time="2025-09-04T23:53:57.529619078Z" level=info msg="API listen on /run/docker.sock"
Sep 4 23:53:57.529996 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 23:53:58.882270 containerd[1925]: time="2025-09-04T23:53:58.882225758Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 4 23:53:59.386240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450583960.mount: Deactivated successfully.
Sep 4 23:54:00.824046 containerd[1925]: time="2025-09-04T23:54:00.823999374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:00.825026 containerd[1925]: time="2025-09-04T23:54:00.824960923Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687"
Sep 4 23:54:00.825873 containerd[1925]: time="2025-09-04T23:54:00.825845735Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:00.829665 containerd[1925]: time="2025-09-04T23:54:00.828604299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:00.829665 containerd[1925]: time="2025-09-04T23:54:00.829431608Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 1.947169033s"
Sep 4 23:54:00.829665 containerd[1925]: time="2025-09-04T23:54:00.829468595Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\""
Sep 4 23:54:00.830134 containerd[1925]: time="2025-09-04T23:54:00.830109388Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 4 23:54:02.667291 containerd[1925]: time="2025-09-04T23:54:02.667238043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:02.672734 containerd[1925]: time="2025-09-04T23:54:02.671931610Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128"
Sep 4 23:54:02.675370 containerd[1925]: time="2025-09-04T23:54:02.675345806Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:02.679976 containerd[1925]: time="2025-09-04T23:54:02.679937399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:02.680986 containerd[1925]: time="2025-09-04T23:54:02.680956907Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.850819394s"
Sep 4 23:54:02.681051 containerd[1925]: time="2025-09-04T23:54:02.680990691Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\""
Sep 4 23:54:02.681520 containerd[1925]: time="2025-09-04T23:54:02.681387957Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 4 23:54:03.978935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:54:03.987169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:54:04.133531 containerd[1925]: time="2025-09-04T23:54:04.133472623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:04.141052 containerd[1925]: time="2025-09-04T23:54:04.140982615Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036"
Sep 4 23:54:04.152880 containerd[1925]: time="2025-09-04T23:54:04.152765924Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:04.167489 containerd[1925]: time="2025-09-04T23:54:04.167420691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:04.168427 containerd[1925]: time="2025-09-04T23:54:04.168393872Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.486981278s"
Sep 4 23:54:04.168504 containerd[1925]: time="2025-09-04T23:54:04.168431152Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\""
Sep 4 23:54:04.169463 containerd[1925]: time="2025-09-04T23:54:04.169425339Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 4 23:54:04.378350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:54:04.382569 (kubelet)[2524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:54:04.425626 kubelet[2524]: E0904 23:54:04.425554 2524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:54:04.429171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:54:04.429313 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:54:04.429597 systemd[1]: kubelet.service: Consumed 156ms CPU time, 110M memory peak.
Sep 4 23:54:05.100177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501673692.mount: Deactivated successfully.
Sep 4 23:54:05.556285 containerd[1925]: time="2025-09-04T23:54:05.556135928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:05.558319 containerd[1925]: time="2025-09-04T23:54:05.558167544Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170"
Sep 4 23:54:05.560963 containerd[1925]: time="2025-09-04T23:54:05.560937641Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:05.564101 containerd[1925]: time="2025-09-04T23:54:05.563555518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:05.564101 containerd[1925]: time="2025-09-04T23:54:05.563988830Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.394538338s"
Sep 4 23:54:05.564101 containerd[1925]: time="2025-09-04T23:54:05.564015284Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\""
Sep 4 23:54:05.564732 containerd[1925]: time="2025-09-04T23:54:05.564709559Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 23:54:06.014990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount372874883.mount: Deactivated successfully.
Sep 4 23:54:06.879211 containerd[1925]: time="2025-09-04T23:54:06.879158089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:06.881075 containerd[1925]: time="2025-09-04T23:54:06.881016708Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 4 23:54:06.882768 containerd[1925]: time="2025-09-04T23:54:06.882327007Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:06.885708 containerd[1925]: time="2025-09-04T23:54:06.885676423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:06.886898 containerd[1925]: time="2025-09-04T23:54:06.886867079Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.322127891s"
Sep 4 23:54:06.887018 containerd[1925]: time="2025-09-04T23:54:06.886998916Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 4 23:54:06.888031 containerd[1925]: time="2025-09-04T23:54:06.888006903Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 23:54:07.358369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844388002.mount: Deactivated successfully.
Sep 4 23:54:07.361595 containerd[1925]: time="2025-09-04T23:54:07.361559261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:07.362439 containerd[1925]: time="2025-09-04T23:54:07.362388938Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 4 23:54:07.364683 containerd[1925]: time="2025-09-04T23:54:07.363519450Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:07.365808 containerd[1925]: time="2025-09-04T23:54:07.365644385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:07.366649 containerd[1925]: time="2025-09-04T23:54:07.366217745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 478.181693ms"
Sep 4 23:54:07.366649 containerd[1925]: time="2025-09-04T23:54:07.366245033Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 4 23:54:07.367117 containerd[1925]: time="2025-09-04T23:54:07.367092762Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 4 23:54:07.827268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1326683493.mount: Deactivated successfully.
Sep 4 23:54:10.113996 containerd[1925]: time="2025-09-04T23:54:10.113937032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:10.123238 containerd[1925]: time="2025-09-04T23:54:10.122993050Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 4 23:54:10.148744 containerd[1925]: time="2025-09-04T23:54:10.148647177Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:10.158343 containerd[1925]: time="2025-09-04T23:54:10.158247041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:54:10.159333 containerd[1925]: time="2025-09-04T23:54:10.159189741Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.792070653s"
Sep 4 23:54:10.159333 containerd[1925]: time="2025-09-04T23:54:10.159222301Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 4 23:54:12.992008 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:54:12.992730 systemd[1]: kubelet.service: Consumed 156ms CPU time, 110M memory peak.
Sep 4 23:54:12.999095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:54:13.036540 systemd[1]: Reload requested from client PID 2677 ('systemctl') (unit session-7.scope)...
Sep 4 23:54:13.036558 systemd[1]: Reloading...
Sep 4 23:54:13.178809 zram_generator::config[2723]: No configuration found.
Sep 4 23:54:13.332252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:54:13.446820 systemd[1]: Reloading finished in 409 ms.
Sep 4 23:54:13.497384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:54:13.513340 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:54:13.513700 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:54:13.514361 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 23:54:13.514650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:54:13.514695 systemd[1]: kubelet.service: Consumed 132ms CPU time, 98.2M memory peak.
Sep 4 23:54:13.520021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:54:13.708360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:54:13.712745 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:54:13.773181 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:54:13.773181 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 23:54:13.773181 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:54:13.773181 kubelet[2789]: I0904 23:54:13.772734 2789 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 23:54:14.093902 kubelet[2789]: I0904 23:54:14.093811 2789 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 23:54:14.093902 kubelet[2789]: I0904 23:54:14.093875 2789 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 23:54:14.094380 kubelet[2789]: I0904 23:54:14.094164 2789 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 23:54:14.140229 kubelet[2789]: I0904 23:54:14.140115 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 23:54:14.141003 kubelet[2789]: E0904 23:54:14.140957 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.112:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:54:14.162635 kubelet[2789]: E0904 23:54:14.162591 2789 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 4 23:54:14.162635 kubelet[2789]: I0904 23:54:14.162630 2789 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 4 23:54:14.167019 kubelet[2789]: I0904 23:54:14.166996 2789 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 23:54:14.169198 kubelet[2789]: I0904 23:54:14.169150 2789 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 23:54:14.169371 kubelet[2789]: I0904 23:54:14.169198 2789 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-112","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 23:54:14.172701 kubelet[2789]: I0904 23:54:14.172674 2789 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 23:54:14.172701 kubelet[2789]: I0904 23:54:14.172701 2789 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 23:54:14.172857 kubelet[2789]: I0904 23:54:14.172842 2789 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:54:14.178688 kubelet[2789]: I0904 23:54:14.178664 2789 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 23:54:14.178779 kubelet[2789]: I0904 23:54:14.178699 2789 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 23:54:14.178779 kubelet[2789]: I0904 23:54:14.178725 2789 kubelet.go:352] "Adding apiserver pod source"
Sep 4 23:54:14.178779 kubelet[2789]: I0904 23:54:14.178739 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 23:54:14.187200 kubelet[2789]: W0904 23:54:14.186640 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.112:6443: connect: connection refused
Sep 4 23:54:14.187200 kubelet[2789]: E0904 23:54:14.186685 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.112:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:54:14.187200 kubelet[2789]: W0904 23:54:14.186741 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-112&limit=500&resourceVersion=0": dial tcp 172.31.21.112:6443: connect: connection refused
Sep 4 23:54:14.187200 kubelet[2789]: E0904 23:54:14.186766 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-112&limit=500&resourceVersion=0\": dial tcp 172.31.21.112:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:54:14.189018 kubelet[2789]: I0904 23:54:14.188935 2789 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 4 23:54:14.193277 kubelet[2789]: I0904 23:54:14.193235 2789 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 23:54:14.193376 kubelet[2789]: W0904 23:54:14.193295 2789 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 23:54:14.194329 kubelet[2789]: I0904 23:54:14.194173 2789 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 23:54:14.194329 kubelet[2789]: I0904 23:54:14.194203 2789 server.go:1287] "Started kubelet"
Sep 4 23:54:14.197522 kubelet[2789]: I0904 23:54:14.197061 2789 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 23:54:14.199450 kubelet[2789]: I0904 23:54:14.198704 2789 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 23:54:14.202581 kubelet[2789]: I0904 23:54:14.202510 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 23:54:14.202882 kubelet[2789]: I0904 23:54:14.202827 2789 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 23:54:14.202932 kubelet[2789]: I0904 23:54:14.202886 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 23:54:14.208840 kubelet[2789]: E0904 23:54:14.204138 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.112:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.112:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-112.186239866fee04a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-112,UID:ip-172-31-21-112,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-112,},FirstTimestamp:2025-09-04 23:54:14.194185384 +0000 UTC m=+0.478469984,LastTimestamp:2025-09-04 23:54:14.194185384 +0000 UTC m=+0.478469984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-112,}"
Sep 4 23:54:14.209450 kubelet[2789]: I0904 23:54:14.209401 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 23:54:14.211105 kubelet[2789]: I0904 23:54:14.210584 2789 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 23:54:14.211105 kubelet[2789]: E0904 23:54:14.210797 2789 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-112\" not found"
Sep 4 23:54:14.213182 kubelet[2789]: I0904 23:54:14.213168 2789 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 23:54:14.213293 kubelet[2789]: I0904 23:54:14.213284 2789 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 23:54:14.216547 kubelet[2789]: E0904 23:54:14.216512 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-112?timeout=10s\": dial tcp 172.31.21.112:6443: connect: connection refused" interval="200ms"
Sep 4 23:54:14.216925 kubelet[2789]: I0904 23:54:14.216775 2789 factory.go:221] Registration of the systemd container factory successfully
Sep 4 23:54:14.217022 kubelet[2789]: I0904 23:54:14.216996 2789 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 23:54:14.219533 kubelet[2789]: W0904 23:54:14.219461 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.112:6443: connect: connection refused
Sep 4 23:54:14.219533 kubelet[2789]: E0904 23:54:14.219505 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.112:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:54:14.222893 kubelet[2789]: E0904 23:54:14.219921 2789 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 23:54:14.222893 kubelet[2789]: I0904 23:54:14.220335 2789 factory.go:221] Registration of the containerd container factory successfully
Sep 4 23:54:14.243171 kubelet[2789]: I0904 23:54:14.243031 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 23:54:14.244606 kubelet[2789]: I0904 23:54:14.244589 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 23:54:14.244809 kubelet[2789]: I0904 23:54:14.244685 2789 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 23:54:14.244809 kubelet[2789]: I0904 23:54:14.244709 2789 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 23:54:14.244809 kubelet[2789]: I0904 23:54:14.244715 2789 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 4 23:54:14.244809 kubelet[2789]: E0904 23:54:14.244759 2789 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 23:54:14.249417 kubelet[2789]: W0904 23:54:14.249369 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.112:6443: connect: connection refused
Sep 4 23:54:14.249502 kubelet[2789]: E0904 23:54:14.249419 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.112:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:54:14.249857 kubelet[2789]: I0904 23:54:14.249841 2789 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 23:54:14.249857 kubelet[2789]: I0904 23:54:14.249853 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 23:54:14.249937 kubelet[2789]: I0904 23:54:14.249882 2789 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:54:14.253760 kubelet[2789]: I0904 23:54:14.253725 2789 policy_none.go:49] "None policy: Start"
Sep 4 23:54:14.253760 kubelet[2789]: I0904 23:54:14.253745 2789 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 23:54:14.253760 kubelet[2789]: I0904 23:54:14.253755 2789 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 23:54:14.261592 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 23:54:14.272460 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 23:54:14.275323 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 23:54:14.280917 kubelet[2789]: I0904 23:54:14.280495 2789 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 23:54:14.280917 kubelet[2789]: I0904 23:54:14.280654 2789 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 23:54:14.280917 kubelet[2789]: I0904 23:54:14.280663 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 23:54:14.280917 kubelet[2789]: I0904 23:54:14.280871 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 23:54:14.282267 kubelet[2789]: E0904 23:54:14.282193 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 23:54:14.282267 kubelet[2789]: E0904 23:54:14.282231 2789 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-112\" not found"
Sep 4 23:54:14.354479 systemd[1]: Created slice kubepods-burstable-poda9c1e961a98502c0d1a0eb61e377d724.slice - libcontainer container kubepods-burstable-poda9c1e961a98502c0d1a0eb61e377d724.slice.
Sep 4 23:54:14.370303 kubelet[2789]: E0904 23:54:14.370282 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-112\" not found" node="ip-172-31-21-112"
Sep 4 23:54:14.372146 systemd[1]: Created slice kubepods-burstable-pod75367ef79a2e0f0e81d8dac62045571f.slice - libcontainer container kubepods-burstable-pod75367ef79a2e0f0e81d8dac62045571f.slice.
Sep 4 23:54:14.374257 kubelet[2789]: E0904 23:54:14.374239 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-112\" not found" node="ip-172-31-21-112" Sep 4 23:54:14.376532 systemd[1]: Created slice kubepods-burstable-pod441a2c8587335e8adeb1c04bd76571ba.slice - libcontainer container kubepods-burstable-pod441a2c8587335e8adeb1c04bd76571ba.slice. Sep 4 23:54:14.378080 kubelet[2789]: E0904 23:54:14.378060 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-112\" not found" node="ip-172-31-21-112" Sep 4 23:54:14.382414 kubelet[2789]: I0904 23:54:14.382398 2789 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-112" Sep 4 23:54:14.382876 kubelet[2789]: E0904 23:54:14.382840 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.112:6443/api/v1/nodes\": dial tcp 172.31.21.112:6443: connect: connection refused" node="ip-172-31-21-112" Sep 4 23:54:14.415275 kubelet[2789]: I0904 23:54:14.415234 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9c1e961a98502c0d1a0eb61e377d724-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-112\" (UID: \"a9c1e961a98502c0d1a0eb61e377d724\") " pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:14.415275 kubelet[2789]: I0904 23:54:14.415276 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75367ef79a2e0f0e81d8dac62045571f-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-112\" (UID: \"75367ef79a2e0f0e81d8dac62045571f\") " pod="kube-system/kube-scheduler-ip-172-31-21-112" Sep 4 23:54:14.415275 kubelet[2789]: I0904 23:54:14.415294 2789 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/441a2c8587335e8adeb1c04bd76571ba-ca-certs\") pod \"kube-apiserver-ip-172-31-21-112\" (UID: \"441a2c8587335e8adeb1c04bd76571ba\") " pod="kube-system/kube-apiserver-ip-172-31-21-112" Sep 4 23:54:14.415275 kubelet[2789]: I0904 23:54:14.415309 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9c1e961a98502c0d1a0eb61e377d724-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-112\" (UID: \"a9c1e961a98502c0d1a0eb61e377d724\") " pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:14.415275 kubelet[2789]: I0904 23:54:14.415347 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9c1e961a98502c0d1a0eb61e377d724-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-112\" (UID: \"a9c1e961a98502c0d1a0eb61e377d724\") " pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:14.415600 kubelet[2789]: I0904 23:54:14.415375 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/441a2c8587335e8adeb1c04bd76571ba-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-112\" (UID: \"441a2c8587335e8adeb1c04bd76571ba\") " pod="kube-system/kube-apiserver-ip-172-31-21-112" Sep 4 23:54:14.415600 kubelet[2789]: I0904 23:54:14.415395 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/441a2c8587335e8adeb1c04bd76571ba-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-112\" (UID: \"441a2c8587335e8adeb1c04bd76571ba\") " pod="kube-system/kube-apiserver-ip-172-31-21-112" Sep 4 23:54:14.415600 kubelet[2789]: I0904 
23:54:14.415410 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9c1e961a98502c0d1a0eb61e377d724-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-112\" (UID: \"a9c1e961a98502c0d1a0eb61e377d724\") " pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:14.415600 kubelet[2789]: I0904 23:54:14.415427 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9c1e961a98502c0d1a0eb61e377d724-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-112\" (UID: \"a9c1e961a98502c0d1a0eb61e377d724\") " pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:14.417662 kubelet[2789]: E0904 23:54:14.417636 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-112?timeout=10s\": dial tcp 172.31.21.112:6443: connect: connection refused" interval="400ms" Sep 4 23:54:14.584771 kubelet[2789]: I0904 23:54:14.584739 2789 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-112" Sep 4 23:54:14.585045 kubelet[2789]: E0904 23:54:14.585023 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.112:6443/api/v1/nodes\": dial tcp 172.31.21.112:6443: connect: connection refused" node="ip-172-31-21-112" Sep 4 23:54:14.672574 containerd[1925]: time="2025-09-04T23:54:14.672514904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-112,Uid:a9c1e961a98502c0d1a0eb61e377d724,Namespace:kube-system,Attempt:0,}" Sep 4 23:54:14.676190 containerd[1925]: time="2025-09-04T23:54:14.676153603Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-112,Uid:75367ef79a2e0f0e81d8dac62045571f,Namespace:kube-system,Attempt:0,}" Sep 4 23:54:14.678974 containerd[1925]: time="2025-09-04T23:54:14.678947184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-112,Uid:441a2c8587335e8adeb1c04bd76571ba,Namespace:kube-system,Attempt:0,}" Sep 4 23:54:14.818953 kubelet[2789]: E0904 23:54:14.818916 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-112?timeout=10s\": dial tcp 172.31.21.112:6443: connect: connection refused" interval="800ms" Sep 4 23:54:14.987339 kubelet[2789]: I0904 23:54:14.986976 2789 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-112" Sep 4 23:54:14.987339 kubelet[2789]: E0904 23:54:14.987254 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.112:6443/api/v1/nodes\": dial tcp 172.31.21.112:6443: connect: connection refused" node="ip-172-31-21-112" Sep 4 23:54:15.058312 kubelet[2789]: W0904 23:54:15.058276 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.112:6443: connect: connection refused Sep 4 23:54:15.058442 kubelet[2789]: E0904 23:54:15.058317 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.112:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:54:15.094329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058967614.mount: Deactivated successfully. 
Sep 4 23:54:15.106806 containerd[1925]: time="2025-09-04T23:54:15.105151244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:54:15.106806 containerd[1925]: time="2025-09-04T23:54:15.105715123Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 23:54:15.107000 containerd[1925]: time="2025-09-04T23:54:15.106970447Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:54:15.107618 containerd[1925]: time="2025-09-04T23:54:15.107595547Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:54:15.108133 containerd[1925]: time="2025-09-04T23:54:15.107989863Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:54:15.110689 containerd[1925]: time="2025-09-04T23:54:15.110657849Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:54:15.111628 containerd[1925]: time="2025-09-04T23:54:15.111570629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:54:15.112328 containerd[1925]: time="2025-09-04T23:54:15.112292060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:54:15.114819 
containerd[1925]: time="2025-09-04T23:54:15.113049565Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 436.803107ms" Sep 4 23:54:15.116263 containerd[1925]: time="2025-09-04T23:54:15.116189176Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 440.643936ms" Sep 4 23:54:15.116908 containerd[1925]: time="2025-09-04T23:54:15.116855282Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 437.847648ms" Sep 4 23:54:15.334890 kubelet[2789]: W0904 23:54:15.332533 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.112:6443: connect: connection refused Sep 4 23:54:15.334890 kubelet[2789]: E0904 23:54:15.332594 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.112:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:54:15.354767 containerd[1925]: time="2025-09-04T23:54:15.354682335Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:54:15.354767 containerd[1925]: time="2025-09-04T23:54:15.354724854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:54:15.356131 containerd[1925]: time="2025-09-04T23:54:15.355676208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:15.356710 containerd[1925]: time="2025-09-04T23:54:15.356513025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:54:15.356710 containerd[1925]: time="2025-09-04T23:54:15.356565571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:54:15.356710 containerd[1925]: time="2025-09-04T23:54:15.356579836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:15.356857 containerd[1925]: time="2025-09-04T23:54:15.356371117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:15.357013 containerd[1925]: time="2025-09-04T23:54:15.356975136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:15.357654 containerd[1925]: time="2025-09-04T23:54:15.357590178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:54:15.357711 containerd[1925]: time="2025-09-04T23:54:15.357681040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:54:15.357769 containerd[1925]: time="2025-09-04T23:54:15.357733662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:15.357949 containerd[1925]: time="2025-09-04T23:54:15.357854687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:15.372286 kubelet[2789]: W0904 23:54:15.372216 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.112:6443: connect: connection refused Sep 4 23:54:15.372389 kubelet[2789]: E0904 23:54:15.372299 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.112:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:54:15.384044 systemd[1]: Started cri-containerd-78c88af3778483d42aba830f17f29683ba76d310ae2053bcfb004c7eb43b5a88.scope - libcontainer container 78c88af3778483d42aba830f17f29683ba76d310ae2053bcfb004c7eb43b5a88. Sep 4 23:54:15.392627 systemd[1]: Started cri-containerd-423cb28f6d205cc7e7601997fbdf8b423990e38872cad9578163626d3f0cd930.scope - libcontainer container 423cb28f6d205cc7e7601997fbdf8b423990e38872cad9578163626d3f0cd930. Sep 4 23:54:15.400268 systemd[1]: Started cri-containerd-003bc2e15ee0c6c3173f7b8b527578d2cc12e4e50d256856296703f71883cfa4.scope - libcontainer container 003bc2e15ee0c6c3173f7b8b527578d2cc12e4e50d256856296703f71883cfa4. 
Sep 4 23:54:15.458986 containerd[1925]: time="2025-09-04T23:54:15.458952682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-112,Uid:441a2c8587335e8adeb1c04bd76571ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"423cb28f6d205cc7e7601997fbdf8b423990e38872cad9578163626d3f0cd930\"" Sep 4 23:54:15.463454 containerd[1925]: time="2025-09-04T23:54:15.463135145Z" level=info msg="CreateContainer within sandbox \"423cb28f6d205cc7e7601997fbdf8b423990e38872cad9578163626d3f0cd930\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:54:15.489085 containerd[1925]: time="2025-09-04T23:54:15.489048087Z" level=info msg="CreateContainer within sandbox \"423cb28f6d205cc7e7601997fbdf8b423990e38872cad9578163626d3f0cd930\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f1ff6117db931b7493640ef23031cd29a6a8a0d6c9a66610b16689e3ca1cb34b\"" Sep 4 23:54:15.490355 containerd[1925]: time="2025-09-04T23:54:15.490249520Z" level=info msg="StartContainer for \"f1ff6117db931b7493640ef23031cd29a6a8a0d6c9a66610b16689e3ca1cb34b\"" Sep 4 23:54:15.509707 containerd[1925]: time="2025-09-04T23:54:15.509670485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-112,Uid:a9c1e961a98502c0d1a0eb61e377d724,Namespace:kube-system,Attempt:0,} returns sandbox id \"003bc2e15ee0c6c3173f7b8b527578d2cc12e4e50d256856296703f71883cfa4\"" Sep 4 23:54:15.518695 containerd[1925]: time="2025-09-04T23:54:15.516915340Z" level=info msg="CreateContainer within sandbox \"003bc2e15ee0c6c3173f7b8b527578d2cc12e4e50d256856296703f71883cfa4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:54:15.525313 containerd[1925]: time="2025-09-04T23:54:15.525276325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-112,Uid:75367ef79a2e0f0e81d8dac62045571f,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"78c88af3778483d42aba830f17f29683ba76d310ae2053bcfb004c7eb43b5a88\"" Sep 4 23:54:15.528940 containerd[1925]: time="2025-09-04T23:54:15.528907908Z" level=info msg="CreateContainer within sandbox \"78c88af3778483d42aba830f17f29683ba76d310ae2053bcfb004c7eb43b5a88\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:54:15.540267 containerd[1925]: time="2025-09-04T23:54:15.540233045Z" level=info msg="CreateContainer within sandbox \"003bc2e15ee0c6c3173f7b8b527578d2cc12e4e50d256856296703f71883cfa4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"79212ca014d5fb528d91a74addab7e92b5d7391373f2be21a6238637c823dfd5\"" Sep 4 23:54:15.540821 containerd[1925]: time="2025-09-04T23:54:15.540676612Z" level=info msg="StartContainer for \"79212ca014d5fb528d91a74addab7e92b5d7391373f2be21a6238637c823dfd5\"" Sep 4 23:54:15.543977 systemd[1]: Started cri-containerd-f1ff6117db931b7493640ef23031cd29a6a8a0d6c9a66610b16689e3ca1cb34b.scope - libcontainer container f1ff6117db931b7493640ef23031cd29a6a8a0d6c9a66610b16689e3ca1cb34b. Sep 4 23:54:15.556229 containerd[1925]: time="2025-09-04T23:54:15.556162805Z" level=info msg="CreateContainer within sandbox \"78c88af3778483d42aba830f17f29683ba76d310ae2053bcfb004c7eb43b5a88\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a06ee6db3834a121d5074f29d0f8719f0aea4b25e79f0f0063c4dfdc8441fc91\"" Sep 4 23:54:15.557175 containerd[1925]: time="2025-09-04T23:54:15.557127282Z" level=info msg="StartContainer for \"a06ee6db3834a121d5074f29d0f8719f0aea4b25e79f0f0063c4dfdc8441fc91\"" Sep 4 23:54:15.606333 systemd[1]: Started cri-containerd-79212ca014d5fb528d91a74addab7e92b5d7391373f2be21a6238637c823dfd5.scope - libcontainer container 79212ca014d5fb528d91a74addab7e92b5d7391373f2be21a6238637c823dfd5. 
Sep 4 23:54:15.610535 systemd[1]: Started cri-containerd-a06ee6db3834a121d5074f29d0f8719f0aea4b25e79f0f0063c4dfdc8441fc91.scope - libcontainer container a06ee6db3834a121d5074f29d0f8719f0aea4b25e79f0f0063c4dfdc8441fc91. Sep 4 23:54:15.619715 kubelet[2789]: E0904 23:54:15.619649 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-112?timeout=10s\": dial tcp 172.31.21.112:6443: connect: connection refused" interval="1.6s" Sep 4 23:54:15.629973 containerd[1925]: time="2025-09-04T23:54:15.629842293Z" level=info msg="StartContainer for \"f1ff6117db931b7493640ef23031cd29a6a8a0d6c9a66610b16689e3ca1cb34b\" returns successfully" Sep 4 23:54:15.689511 containerd[1925]: time="2025-09-04T23:54:15.689384770Z" level=info msg="StartContainer for \"a06ee6db3834a121d5074f29d0f8719f0aea4b25e79f0f0063c4dfdc8441fc91\" returns successfully" Sep 4 23:54:15.692561 containerd[1925]: time="2025-09-04T23:54:15.692458846Z" level=info msg="StartContainer for \"79212ca014d5fb528d91a74addab7e92b5d7391373f2be21a6238637c823dfd5\" returns successfully" Sep 4 23:54:15.772034 kubelet[2789]: W0904 23:54:15.771957 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-112&limit=500&resourceVersion=0": dial tcp 172.31.21.112:6443: connect: connection refused Sep 4 23:54:15.772034 kubelet[2789]: E0904 23:54:15.772086 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-112&limit=500&resourceVersion=0\": dial tcp 172.31.21.112:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:54:15.790539 kubelet[2789]: I0904 23:54:15.790062 2789 kubelet_node_status.go:75] 
"Attempting to register node" node="ip-172-31-21-112" Sep 4 23:54:15.790539 kubelet[2789]: E0904 23:54:15.790388 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.112:6443/api/v1/nodes\": dial tcp 172.31.21.112:6443: connect: connection refused" node="ip-172-31-21-112" Sep 4 23:54:16.264875 kubelet[2789]: E0904 23:54:16.264427 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-112\" not found" node="ip-172-31-21-112" Sep 4 23:54:16.268167 kubelet[2789]: E0904 23:54:16.267757 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-112\" not found" node="ip-172-31-21-112" Sep 4 23:54:16.271605 kubelet[2789]: E0904 23:54:16.271384 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-112\" not found" node="ip-172-31-21-112" Sep 4 23:54:16.340754 kubelet[2789]: E0904 23:54:16.340718 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.112:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:54:17.216401 kubelet[2789]: W0904 23:54:17.216360 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.112:6443: connect: connection refused Sep 4 23:54:17.216702 kubelet[2789]: E0904 23:54:17.216413 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.21.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.112:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:54:17.221095 kubelet[2789]: E0904 23:54:17.221056 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-112?timeout=10s\": dial tcp 172.31.21.112:6443: connect: connection refused" interval="3.2s" Sep 4 23:54:17.271326 kubelet[2789]: E0904 23:54:17.271291 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-112\" not found" node="ip-172-31-21-112" Sep 4 23:54:17.272506 kubelet[2789]: E0904 23:54:17.272484 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-112\" not found" node="ip-172-31-21-112" Sep 4 23:54:17.338821 kubelet[2789]: W0904 23:54:17.338668 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.112:6443: connect: connection refused Sep 4 23:54:17.338821 kubelet[2789]: E0904 23:54:17.338747 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.112:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:54:17.392948 kubelet[2789]: I0904 23:54:17.392914 2789 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-112" Sep 4 23:54:17.393262 kubelet[2789]: E0904 23:54:17.393230 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://172.31.21.112:6443/api/v1/nodes\": dial tcp 172.31.21.112:6443: connect: connection refused" node="ip-172-31-21-112" Sep 4 23:54:19.073825 kubelet[2789]: E0904 23:54:19.073768 2789 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-21-112" not found Sep 4 23:54:19.185452 kubelet[2789]: I0904 23:54:19.185411 2789 apiserver.go:52] "Watching apiserver" Sep 4 23:54:19.214023 kubelet[2789]: I0904 23:54:19.213977 2789 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:54:19.424298 kubelet[2789]: E0904 23:54:19.424265 2789 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-21-112" not found Sep 4 23:54:19.517928 kubelet[2789]: E0904 23:54:19.517857 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-112\" not found" node="ip-172-31-21-112" Sep 4 23:54:19.697814 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 4 23:54:19.859595 kubelet[2789]: E0904 23:54:19.859564 2789 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-21-112" not found Sep 4 23:54:20.426450 kubelet[2789]: E0904 23:54:20.426423 2789 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-112\" not found" node="ip-172-31-21-112" Sep 4 23:54:20.595862 kubelet[2789]: I0904 23:54:20.595205 2789 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-112" Sep 4 23:54:20.604132 kubelet[2789]: I0904 23:54:20.603334 2789 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-112" Sep 4 23:54:20.604132 kubelet[2789]: E0904 23:54:20.603369 2789 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-21-112\": node \"ip-172-31-21-112\" not found" Sep 4 23:54:20.612058 kubelet[2789]: I0904 23:54:20.611845 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:20.620778 kubelet[2789]: I0904 23:54:20.620743 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-112" Sep 4 23:54:20.627413 kubelet[2789]: I0904 23:54:20.625871 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-112" Sep 4 23:54:20.878234 systemd[1]: Reload requested from client PID 3068 ('systemctl') (unit session-7.scope)... Sep 4 23:54:20.878249 systemd[1]: Reloading... Sep 4 23:54:20.979814 zram_generator::config[3113]: No configuration found. Sep 4 23:54:21.101559 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 4 23:54:21.230377 systemd[1]: Reloading finished in 351 ms. Sep 4 23:54:21.253747 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:54:21.264430 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:54:21.264654 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:54:21.264711 systemd[1]: kubelet.service: Consumed 782ms CPU time, 127.7M memory peak. Sep 4 23:54:21.271145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:54:21.571282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:54:21.575951 (kubelet)[3173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:54:21.627374 kubelet[3173]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:54:21.627374 kubelet[3173]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:54:21.627374 kubelet[3173]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 23:54:21.627721 kubelet[3173]: I0904 23:54:21.627438 3173 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:54:21.635212 kubelet[3173]: I0904 23:54:21.635186 3173 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:54:21.635809 kubelet[3173]: I0904 23:54:21.635322 3173 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:54:21.635809 kubelet[3173]: I0904 23:54:21.635537 3173 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:54:21.636766 kubelet[3173]: I0904 23:54:21.636750 3173 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 23:54:21.641345 kubelet[3173]: I0904 23:54:21.641322 3173 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:54:21.645393 kubelet[3173]: E0904 23:54:21.645361 3173 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:54:21.645393 kubelet[3173]: I0904 23:54:21.645387 3173 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:54:21.648130 kubelet[3173]: I0904 23:54:21.648101 3173 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:54:21.648359 kubelet[3173]: I0904 23:54:21.648331 3173 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:54:21.648531 kubelet[3173]: I0904 23:54:21.648358 3173 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-112","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:54:21.648531 kubelet[3173]: I0904 23:54:21.648522 3173 topology_manager.go:138] "Creating topology manager with none 
policy" Sep 4 23:54:21.648531 kubelet[3173]: I0904 23:54:21.648531 3173 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:54:21.652014 kubelet[3173]: I0904 23:54:21.651986 3173 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:54:21.652164 kubelet[3173]: I0904 23:54:21.652153 3173 kubelet.go:446] "Attempting to sync node with API server" Sep 4 23:54:21.652240 kubelet[3173]: I0904 23:54:21.652209 3173 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:54:21.652240 kubelet[3173]: I0904 23:54:21.652231 3173 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:54:21.652240 kubelet[3173]: I0904 23:54:21.652239 3173 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:54:21.655669 kubelet[3173]: I0904 23:54:21.655557 3173 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:54:21.657652 kubelet[3173]: I0904 23:54:21.657631 3173 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:54:21.664988 kubelet[3173]: I0904 23:54:21.664883 3173 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:54:21.664988 kubelet[3173]: I0904 23:54:21.664913 3173 server.go:1287] "Started kubelet" Sep 4 23:54:21.665592 kubelet[3173]: I0904 23:54:21.665523 3173 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:54:21.667071 kubelet[3173]: I0904 23:54:21.667055 3173 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:54:21.667232 kubelet[3173]: I0904 23:54:21.667132 3173 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:54:21.668709 kubelet[3173]: I0904 23:54:21.668404 3173 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:54:21.668709 kubelet[3173]: I0904 23:54:21.668583 3173 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:54:21.671198 kubelet[3173]: I0904 23:54:21.671093 3173 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:54:21.674966 kubelet[3173]: I0904 23:54:21.674094 3173 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:54:21.677767 kubelet[3173]: I0904 23:54:21.676939 3173 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:54:21.678662 kubelet[3173]: I0904 23:54:21.678645 3173 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:54:21.689662 kubelet[3173]: I0904 23:54:21.689411 3173 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:54:21.692769 kubelet[3173]: I0904 23:54:21.692731 3173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:54:21.694830 kubelet[3173]: I0904 23:54:21.693841 3173 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:54:21.694830 kubelet[3173]: I0904 23:54:21.693859 3173 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:54:21.696086 kubelet[3173]: I0904 23:54:21.695758 3173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:54:21.697286 kubelet[3173]: I0904 23:54:21.696608 3173 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:54:21.697286 kubelet[3173]: I0904 23:54:21.696635 3173 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 23:54:21.697286 kubelet[3173]: I0904 23:54:21.696647 3173 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:54:21.697286 kubelet[3173]: E0904 23:54:21.696689 3173 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:54:21.708242 sudo[3192]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 23:54:21.708918 sudo[3192]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 23:54:21.747073 kubelet[3173]: I0904 23:54:21.747049 3173 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:54:21.747073 kubelet[3173]: I0904 23:54:21.747066 3173 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:54:21.747073 kubelet[3173]: I0904 23:54:21.747083 3173 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:54:21.747269 kubelet[3173]: I0904 23:54:21.747233 3173 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:54:21.747269 kubelet[3173]: I0904 23:54:21.747242 3173 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:54:21.747269 kubelet[3173]: I0904 23:54:21.747258 3173 policy_none.go:49] "None policy: Start" Sep 4 23:54:21.747269 kubelet[3173]: I0904 23:54:21.747267 3173 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:54:21.747361 kubelet[3173]: I0904 23:54:21.747275 3173 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:54:21.747393 kubelet[3173]: I0904 23:54:21.747363 3173 state_mem.go:75] "Updated machine memory state" Sep 4 23:54:21.753541 kubelet[3173]: I0904 23:54:21.753263 3173 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:54:21.753541 kubelet[3173]: I0904 23:54:21.753414 3173 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:54:21.753541 
kubelet[3173]: I0904 23:54:21.753424 3173 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:54:21.759181 kubelet[3173]: I0904 23:54:21.758515 3173 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:54:21.760563 kubelet[3173]: E0904 23:54:21.760513 3173 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:54:21.797228 kubelet[3173]: I0904 23:54:21.797194 3173 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-112" Sep 4 23:54:21.798619 kubelet[3173]: I0904 23:54:21.798510 3173 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-112" Sep 4 23:54:21.798988 kubelet[3173]: I0904 23:54:21.798978 3173 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:21.807804 kubelet[3173]: E0904 23:54:21.807745 3173 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-112\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-112" Sep 4 23:54:21.810233 kubelet[3173]: E0904 23:54:21.810204 3173 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-112\" already exists" pod="kube-system/kube-scheduler-ip-172-31-21-112" Sep 4 23:54:21.810505 kubelet[3173]: E0904 23:54:21.810488 3173 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-112\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:21.862170 kubelet[3173]: I0904 23:54:21.862090 3173 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-112" Sep 4 23:54:21.872900 kubelet[3173]: I0904 23:54:21.872872 3173 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-112" Sep 4 
23:54:21.873046 kubelet[3173]: I0904 23:54:21.872950 3173 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-112" Sep 4 23:54:21.882948 kubelet[3173]: I0904 23:54:21.882875 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9c1e961a98502c0d1a0eb61e377d724-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-112\" (UID: \"a9c1e961a98502c0d1a0eb61e377d724\") " pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:21.883072 kubelet[3173]: I0904 23:54:21.882994 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9c1e961a98502c0d1a0eb61e377d724-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-112\" (UID: \"a9c1e961a98502c0d1a0eb61e377d724\") " pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:21.883072 kubelet[3173]: I0904 23:54:21.883034 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75367ef79a2e0f0e81d8dac62045571f-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-112\" (UID: \"75367ef79a2e0f0e81d8dac62045571f\") " pod="kube-system/kube-scheduler-ip-172-31-21-112" Sep 4 23:54:21.883194 kubelet[3173]: I0904 23:54:21.883094 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/441a2c8587335e8adeb1c04bd76571ba-ca-certs\") pod \"kube-apiserver-ip-172-31-21-112\" (UID: \"441a2c8587335e8adeb1c04bd76571ba\") " pod="kube-system/kube-apiserver-ip-172-31-21-112" Sep 4 23:54:21.883194 kubelet[3173]: I0904 23:54:21.883119 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/441a2c8587335e8adeb1c04bd76571ba-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-112\" (UID: \"441a2c8587335e8adeb1c04bd76571ba\") " pod="kube-system/kube-apiserver-ip-172-31-21-112" Sep 4 23:54:21.883194 kubelet[3173]: I0904 23:54:21.883190 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9c1e961a98502c0d1a0eb61e377d724-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-112\" (UID: \"a9c1e961a98502c0d1a0eb61e377d724\") " pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:21.883320 kubelet[3173]: I0904 23:54:21.883246 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9c1e961a98502c0d1a0eb61e377d724-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-112\" (UID: \"a9c1e961a98502c0d1a0eb61e377d724\") " pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:21.883320 kubelet[3173]: I0904 23:54:21.883272 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/441a2c8587335e8adeb1c04bd76571ba-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-112\" (UID: \"441a2c8587335e8adeb1c04bd76571ba\") " pod="kube-system/kube-apiserver-ip-172-31-21-112" Sep 4 23:54:21.884989 kubelet[3173]: I0904 23:54:21.884660 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9c1e961a98502c0d1a0eb61e377d724-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-112\" (UID: \"a9c1e961a98502c0d1a0eb61e377d724\") " pod="kube-system/kube-controller-manager-ip-172-31-21-112" Sep 4 23:54:22.486277 sudo[3192]: pam_unix(sudo:session): session closed for user root Sep 4 23:54:22.653609 kubelet[3173]: I0904 
23:54:22.653573 3173 apiserver.go:52] "Watching apiserver" Sep 4 23:54:22.677064 kubelet[3173]: I0904 23:54:22.677025 3173 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:54:22.729140 kubelet[3173]: I0904 23:54:22.729113 3173 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-112" Sep 4 23:54:22.743650 kubelet[3173]: E0904 23:54:22.743546 3173 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-112\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-112" Sep 4 23:54:22.780806 kubelet[3173]: I0904 23:54:22.779851 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-112" podStartSLOduration=2.7798313930000003 podStartE2EDuration="2.779831393s" podCreationTimestamp="2025-09-04 23:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:54:22.779421866 +0000 UTC m=+1.195762410" watchObservedRunningTime="2025-09-04 23:54:22.779831393 +0000 UTC m=+1.196171932" Sep 4 23:54:22.780806 kubelet[3173]: I0904 23:54:22.780540 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-112" podStartSLOduration=2.780527409 podStartE2EDuration="2.780527409s" podCreationTimestamp="2025-09-04 23:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:54:22.767661959 +0000 UTC m=+1.184002501" watchObservedRunningTime="2025-09-04 23:54:22.780527409 +0000 UTC m=+1.196867953" Sep 4 23:54:22.808504 kubelet[3173]: I0904 23:54:22.808441 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-112" podStartSLOduration=2.808418974 
podStartE2EDuration="2.808418974s" podCreationTimestamp="2025-09-04 23:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:54:22.794707996 +0000 UTC m=+1.211048540" watchObservedRunningTime="2025-09-04 23:54:22.808418974 +0000 UTC m=+1.224759520" Sep 4 23:54:24.477882 sudo[2247]: pam_unix(sudo:session): session closed for user root Sep 4 23:54:24.499593 sshd[2246]: Connection closed by 139.178.68.195 port 46624 Sep 4 23:54:24.500508 sshd-session[2244]: pam_unix(sshd:session): session closed for user core Sep 4 23:54:24.503307 systemd[1]: sshd@6-172.31.21.112:22-139.178.68.195:46624.service: Deactivated successfully. Sep 4 23:54:24.505266 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 23:54:24.505482 systemd[1]: session-7.scope: Consumed 4.764s CPU time, 208M memory peak. Sep 4 23:54:24.507448 systemd-logind[1900]: Session 7 logged out. Waiting for processes to exit. Sep 4 23:54:24.508698 systemd-logind[1900]: Removed session 7. Sep 4 23:54:26.015721 kubelet[3173]: I0904 23:54:26.015691 3173 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:54:26.017815 containerd[1925]: time="2025-09-04T23:54:26.016303516Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 23:54:26.018266 kubelet[3173]: I0904 23:54:26.016629 3173 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:54:26.987236 systemd[1]: Created slice kubepods-besteffort-pod6c6bbbe0_81e3_4375_8d3b_09e51ff616f6.slice - libcontainer container kubepods-besteffort-pod6c6bbbe0_81e3_4375_8d3b_09e51ff616f6.slice. Sep 4 23:54:27.001570 systemd[1]: Created slice kubepods-burstable-podb51dd85d_abb8_4199_9fee_c8fd3481d84a.slice - libcontainer container kubepods-burstable-podb51dd85d_abb8_4199_9fee_c8fd3481d84a.slice. 
Sep 4 23:54:27.012093 kubelet[3173]: I0904 23:54:27.012056 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c6bbbe0-81e3-4375-8d3b-09e51ff616f6-xtables-lock\") pod \"kube-proxy-nx697\" (UID: \"6c6bbbe0-81e3-4375-8d3b-09e51ff616f6\") " pod="kube-system/kube-proxy-nx697" Sep 4 23:54:27.012290 kubelet[3173]: I0904 23:54:27.012102 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-xtables-lock\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012290 kubelet[3173]: I0904 23:54:27.012138 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b51dd85d-abb8-4199-9fee-c8fd3481d84a-hubble-tls\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012290 kubelet[3173]: I0904 23:54:27.012202 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-etc-cni-netd\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012290 kubelet[3173]: I0904 23:54:27.012243 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cilium-config-path\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012290 kubelet[3173]: I0904 23:54:27.012269 3173 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-host-proc-sys-net\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012290 kubelet[3173]: I0904 23:54:27.012291 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cni-path\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012557 kubelet[3173]: I0904 23:54:27.012314 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-lib-modules\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012557 kubelet[3173]: I0904 23:54:27.012337 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6j8r\" (UniqueName: \"kubernetes.io/projected/b51dd85d-abb8-4199-9fee-c8fd3481d84a-kube-api-access-r6j8r\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012557 kubelet[3173]: I0904 23:54:27.012367 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c6bbbe0-81e3-4375-8d3b-09e51ff616f6-lib-modules\") pod \"kube-proxy-nx697\" (UID: \"6c6bbbe0-81e3-4375-8d3b-09e51ff616f6\") " pod="kube-system/kube-proxy-nx697" Sep 4 23:54:27.012557 kubelet[3173]: I0904 23:54:27.012390 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-bpf-maps\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012557 kubelet[3173]: I0904 23:54:27.012415 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-host-proc-sys-kernel\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012557 kubelet[3173]: I0904 23:54:27.012445 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6c6bbbe0-81e3-4375-8d3b-09e51ff616f6-kube-proxy\") pod \"kube-proxy-nx697\" (UID: \"6c6bbbe0-81e3-4375-8d3b-09e51ff616f6\") " pod="kube-system/kube-proxy-nx697" Sep 4 23:54:27.012776 kubelet[3173]: I0904 23:54:27.012469 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv586\" (UniqueName: \"kubernetes.io/projected/6c6bbbe0-81e3-4375-8d3b-09e51ff616f6-kube-api-access-jv586\") pod \"kube-proxy-nx697\" (UID: \"6c6bbbe0-81e3-4375-8d3b-09e51ff616f6\") " pod="kube-system/kube-proxy-nx697" Sep 4 23:54:27.012776 kubelet[3173]: I0904 23:54:27.012493 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-hostproc\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012776 kubelet[3173]: I0904 23:54:27.012518 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cilium-cgroup\") pod \"cilium-bmtr7\" (UID: 
\"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012776 kubelet[3173]: I0904 23:54:27.012547 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b51dd85d-abb8-4199-9fee-c8fd3481d84a-clustermesh-secrets\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.012776 kubelet[3173]: I0904 23:54:27.012574 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cilium-run\") pod \"cilium-bmtr7\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " pod="kube-system/cilium-bmtr7" Sep 4 23:54:27.060589 systemd[1]: Created slice kubepods-besteffort-pod13b98d83_4540_46af_a1b3_f74609a0712a.slice - libcontainer container kubepods-besteffort-pod13b98d83_4540_46af_a1b3_f74609a0712a.slice. 
Sep 4 23:54:27.113658 kubelet[3173]: I0904 23:54:27.113236 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k728d\" (UniqueName: \"kubernetes.io/projected/13b98d83-4540-46af-a1b3-f74609a0712a-kube-api-access-k728d\") pod \"cilium-operator-6c4d7847fc-r2mp9\" (UID: \"13b98d83-4540-46af-a1b3-f74609a0712a\") " pod="kube-system/cilium-operator-6c4d7847fc-r2mp9" Sep 4 23:54:27.113658 kubelet[3173]: I0904 23:54:27.113314 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13b98d83-4540-46af-a1b3-f74609a0712a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-r2mp9\" (UID: \"13b98d83-4540-46af-a1b3-f74609a0712a\") " pod="kube-system/cilium-operator-6c4d7847fc-r2mp9" Sep 4 23:54:27.299285 containerd[1925]: time="2025-09-04T23:54:27.299183730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nx697,Uid:6c6bbbe0-81e3-4375-8d3b-09e51ff616f6,Namespace:kube-system,Attempt:0,}" Sep 4 23:54:27.309567 containerd[1925]: time="2025-09-04T23:54:27.309285072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmtr7,Uid:b51dd85d-abb8-4199-9fee-c8fd3481d84a,Namespace:kube-system,Attempt:0,}" Sep 4 23:54:27.344373 containerd[1925]: time="2025-09-04T23:54:27.344045245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:54:27.344373 containerd[1925]: time="2025-09-04T23:54:27.344311145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:54:27.344710 containerd[1925]: time="2025-09-04T23:54:27.344661722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:27.347885 containerd[1925]: time="2025-09-04T23:54:27.347382767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:27.362556 containerd[1925]: time="2025-09-04T23:54:27.362276462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:54:27.362556 containerd[1925]: time="2025-09-04T23:54:27.362401415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:54:27.362556 containerd[1925]: time="2025-09-04T23:54:27.362412010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:27.362556 containerd[1925]: time="2025-09-04T23:54:27.362482015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:27.367812 containerd[1925]: time="2025-09-04T23:54:27.367764186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r2mp9,Uid:13b98d83-4540-46af-a1b3-f74609a0712a,Namespace:kube-system,Attempt:0,}" Sep 4 23:54:27.369962 systemd[1]: Started cri-containerd-60d7c6f3e783996f8f91834ef9f88734b8786af6ca76746ec38f6694c234990a.scope - libcontainer container 60d7c6f3e783996f8f91834ef9f88734b8786af6ca76746ec38f6694c234990a. Sep 4 23:54:27.378387 systemd[1]: Started cri-containerd-d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18.scope - libcontainer container d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18. 
Sep 4 23:54:27.415954 containerd[1925]: time="2025-09-04T23:54:27.415750282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmtr7,Uid:b51dd85d-abb8-4199-9fee-c8fd3481d84a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\"" Sep 4 23:54:27.420036 containerd[1925]: time="2025-09-04T23:54:27.418330781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:54:27.420036 containerd[1925]: time="2025-09-04T23:54:27.418385588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:54:27.420036 containerd[1925]: time="2025-09-04T23:54:27.418399142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:27.420036 containerd[1925]: time="2025-09-04T23:54:27.418481778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:27.420727 containerd[1925]: time="2025-09-04T23:54:27.420416426Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 23:54:27.432238 containerd[1925]: time="2025-09-04T23:54:27.432210821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nx697,Uid:6c6bbbe0-81e3-4375-8d3b-09e51ff616f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"60d7c6f3e783996f8f91834ef9f88734b8786af6ca76746ec38f6694c234990a\"" Sep 4 23:54:27.440959 systemd[1]: Started cri-containerd-eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd.scope - libcontainer container eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd. 
Sep 4 23:54:27.441782 containerd[1925]: time="2025-09-04T23:54:27.441407801Z" level=info msg="CreateContainer within sandbox \"60d7c6f3e783996f8f91834ef9f88734b8786af6ca76746ec38f6694c234990a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 23:54:27.473155 containerd[1925]: time="2025-09-04T23:54:27.473122171Z" level=info msg="CreateContainer within sandbox \"60d7c6f3e783996f8f91834ef9f88734b8786af6ca76746ec38f6694c234990a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"28f0a4e54d70acbb038f20f442cd9fb5d58c5843f06b118b9f1e25e4f04285c4\"" Sep 4 23:54:27.474733 containerd[1925]: time="2025-09-04T23:54:27.474473565Z" level=info msg="StartContainer for \"28f0a4e54d70acbb038f20f442cd9fb5d58c5843f06b118b9f1e25e4f04285c4\"" Sep 4 23:54:27.487659 containerd[1925]: time="2025-09-04T23:54:27.487513996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r2mp9,Uid:13b98d83-4540-46af-a1b3-f74609a0712a,Namespace:kube-system,Attempt:0,} returns sandbox id \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\"" Sep 4 23:54:27.513960 systemd[1]: Started cri-containerd-28f0a4e54d70acbb038f20f442cd9fb5d58c5843f06b118b9f1e25e4f04285c4.scope - libcontainer container 28f0a4e54d70acbb038f20f442cd9fb5d58c5843f06b118b9f1e25e4f04285c4. 
Sep 4 23:54:27.546170 containerd[1925]: time="2025-09-04T23:54:27.546136521Z" level=info msg="StartContainer for \"28f0a4e54d70acbb038f20f442cd9fb5d58c5843f06b118b9f1e25e4f04285c4\" returns successfully" Sep 4 23:54:28.035723 kubelet[3173]: I0904 23:54:28.035657 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nx697" podStartSLOduration=2.035634925 podStartE2EDuration="2.035634925s" podCreationTimestamp="2025-09-04 23:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:54:27.751564939 +0000 UTC m=+6.167905483" watchObservedRunningTime="2025-09-04 23:54:28.035634925 +0000 UTC m=+6.451975469" Sep 4 23:54:32.296080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596965702.mount: Deactivated successfully. Sep 4 23:54:34.461263 update_engine[1902]: I20250904 23:54:34.461198 1902 update_attempter.cc:509] Updating boot flags... Sep 4 23:54:34.643908 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3578) Sep 4 23:54:35.448310 containerd[1925]: time="2025-09-04T23:54:35.448258011Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:54:35.455975 containerd[1925]: time="2025-09-04T23:54:35.455872115Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 4 23:54:35.458440 containerd[1925]: time="2025-09-04T23:54:35.458383373Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:54:35.459842 containerd[1925]: time="2025-09-04T23:54:35.459587210Z" level=info 
msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.039140599s" Sep 4 23:54:35.459842 containerd[1925]: time="2025-09-04T23:54:35.459615814Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 4 23:54:35.463188 containerd[1925]: time="2025-09-04T23:54:35.462976598Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 23:54:35.463727 containerd[1925]: time="2025-09-04T23:54:35.463705361Z" level=info msg="CreateContainer within sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:54:35.539810 containerd[1925]: time="2025-09-04T23:54:35.539733580Z" level=info msg="CreateContainer within sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd\"" Sep 4 23:54:35.540868 containerd[1925]: time="2025-09-04T23:54:35.540384726Z" level=info msg="StartContainer for \"5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd\"" Sep 4 23:54:35.646915 systemd[1]: Started cri-containerd-5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd.scope - libcontainer container 5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd. 
Sep 4 23:54:35.682872 containerd[1925]: time="2025-09-04T23:54:35.682837651Z" level=info msg="StartContainer for \"5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd\" returns successfully" Sep 4 23:54:35.698294 systemd[1]: cri-containerd-5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd.scope: Deactivated successfully. Sep 4 23:54:35.889710 containerd[1925]: time="2025-09-04T23:54:35.869677996Z" level=info msg="shim disconnected" id=5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd namespace=k8s.io Sep 4 23:54:35.889710 containerd[1925]: time="2025-09-04T23:54:35.889710002Z" level=warning msg="cleaning up after shim disconnected" id=5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd namespace=k8s.io Sep 4 23:54:35.889940 containerd[1925]: time="2025-09-04T23:54:35.889723754Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:54:35.902368 containerd[1925]: time="2025-09-04T23:54:35.902297054Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:54:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 23:54:36.536063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd-rootfs.mount: Deactivated successfully. Sep 4 23:54:36.679166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2024408902.mount: Deactivated successfully. 
Sep 4 23:54:36.791621 containerd[1925]: time="2025-09-04T23:54:36.791269528Z" level=info msg="CreateContainer within sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 23:54:36.838572 containerd[1925]: time="2025-09-04T23:54:36.838446750Z" level=info msg="CreateContainer within sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f\"" Sep 4 23:54:36.850745 containerd[1925]: time="2025-09-04T23:54:36.849970716Z" level=info msg="StartContainer for \"56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f\"" Sep 4 23:54:36.909947 systemd[1]: Started cri-containerd-56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f.scope - libcontainer container 56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f. Sep 4 23:54:36.947365 containerd[1925]: time="2025-09-04T23:54:36.947330769Z" level=info msg="StartContainer for \"56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f\" returns successfully" Sep 4 23:54:36.957371 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 23:54:36.958069 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:54:36.958206 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:54:36.964061 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:54:36.964248 systemd[1]: cri-containerd-56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f.scope: Deactivated successfully. Sep 4 23:54:36.991479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 23:54:37.003473 containerd[1925]: time="2025-09-04T23:54:37.003398982Z" level=info msg="shim disconnected" id=56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f namespace=k8s.io Sep 4 23:54:37.003901 containerd[1925]: time="2025-09-04T23:54:37.003479442Z" level=warning msg="cleaning up after shim disconnected" id=56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f namespace=k8s.io Sep 4 23:54:37.003901 containerd[1925]: time="2025-09-04T23:54:37.003493533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:54:37.017464 containerd[1925]: time="2025-09-04T23:54:37.017405834Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:54:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 23:54:37.536677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3570627167.mount: Deactivated successfully. Sep 4 23:54:37.806867 containerd[1925]: time="2025-09-04T23:54:37.805730311Z" level=info msg="CreateContainer within sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 23:54:37.841693 containerd[1925]: time="2025-09-04T23:54:37.841661024Z" level=info msg="CreateContainer within sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9\"" Sep 4 23:54:37.842638 containerd[1925]: time="2025-09-04T23:54:37.842608133Z" level=info msg="StartContainer for \"b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9\"" Sep 4 23:54:37.886045 systemd[1]: Started cri-containerd-b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9.scope - libcontainer container b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9. 
Sep 4 23:54:37.937492 containerd[1925]: time="2025-09-04T23:54:37.937267541Z" level=info msg="StartContainer for \"b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9\" returns successfully" Sep 4 23:54:37.975286 systemd[1]: cri-containerd-b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9.scope: Deactivated successfully. Sep 4 23:54:37.975532 systemd[1]: cri-containerd-b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9.scope: Consumed 18ms CPU time, 5.3M memory peak, 1M read from disk. Sep 4 23:54:38.067742 containerd[1925]: time="2025-09-04T23:54:38.067171578Z" level=info msg="shim disconnected" id=b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9 namespace=k8s.io Sep 4 23:54:38.067742 containerd[1925]: time="2025-09-04T23:54:38.067227452Z" level=warning msg="cleaning up after shim disconnected" id=b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9 namespace=k8s.io Sep 4 23:54:38.067742 containerd[1925]: time="2025-09-04T23:54:38.067238350Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:54:38.315489 containerd[1925]: time="2025-09-04T23:54:38.315443705Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:54:38.316342 containerd[1925]: time="2025-09-04T23:54:38.316175742Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 4 23:54:38.318465 containerd[1925]: time="2025-09-04T23:54:38.317130626Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:54:38.318465 containerd[1925]: time="2025-09-04T23:54:38.318304677Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.855302604s" Sep 4 23:54:38.318465 containerd[1925]: time="2025-09-04T23:54:38.318331081Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 4 23:54:38.320729 containerd[1925]: time="2025-09-04T23:54:38.320701787Z" level=info msg="CreateContainer within sandbox \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 23:54:38.337405 containerd[1925]: time="2025-09-04T23:54:38.337344342Z" level=info msg="CreateContainer within sandbox \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\"" Sep 4 23:54:38.337926 containerd[1925]: time="2025-09-04T23:54:38.337904250Z" level=info msg="StartContainer for \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\"" Sep 4 23:54:38.369978 systemd[1]: Started cri-containerd-2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d.scope - libcontainer container 2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d. 
Sep 4 23:54:38.409413 containerd[1925]: time="2025-09-04T23:54:38.409376753Z" level=info msg="StartContainer for \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\" returns successfully" Sep 4 23:54:38.543362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9-rootfs.mount: Deactivated successfully. Sep 4 23:54:38.814987 containerd[1925]: time="2025-09-04T23:54:38.814830893Z" level=info msg="CreateContainer within sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 23:54:38.839882 containerd[1925]: time="2025-09-04T23:54:38.836812941Z" level=info msg="CreateContainer within sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf\"" Sep 4 23:54:38.839882 containerd[1925]: time="2025-09-04T23:54:38.837914212Z" level=info msg="StartContainer for \"5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf\"" Sep 4 23:54:38.839285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3393526987.mount: Deactivated successfully. Sep 4 23:54:38.912960 systemd[1]: Started cri-containerd-5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf.scope - libcontainer container 5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf. 
Sep 4 23:54:38.922184 kubelet[3173]: I0904 23:54:38.921920 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-r2mp9" podStartSLOduration=1.091203892 podStartE2EDuration="11.921874389s" podCreationTimestamp="2025-09-04 23:54:27 +0000 UTC" firstStartedPulling="2025-09-04 23:54:27.488608761 +0000 UTC m=+5.904949286" lastFinishedPulling="2025-09-04 23:54:38.319279262 +0000 UTC m=+16.735619783" observedRunningTime="2025-09-04 23:54:38.829002973 +0000 UTC m=+17.245343518" watchObservedRunningTime="2025-09-04 23:54:38.921874389 +0000 UTC m=+17.338214933" Sep 4 23:54:39.010984 containerd[1925]: time="2025-09-04T23:54:39.010939410Z" level=info msg="StartContainer for \"5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf\" returns successfully" Sep 4 23:54:39.021312 systemd[1]: cri-containerd-5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf.scope: Deactivated successfully. Sep 4 23:54:39.136862 containerd[1925]: time="2025-09-04T23:54:39.132877944Z" level=info msg="shim disconnected" id=5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf namespace=k8s.io Sep 4 23:54:39.136862 containerd[1925]: time="2025-09-04T23:54:39.132950256Z" level=warning msg="cleaning up after shim disconnected" id=5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf namespace=k8s.io Sep 4 23:54:39.136862 containerd[1925]: time="2025-09-04T23:54:39.132964600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:54:39.537300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf-rootfs.mount: Deactivated successfully. 
Sep 4 23:54:39.816630 containerd[1925]: time="2025-09-04T23:54:39.816438584Z" level=info msg="CreateContainer within sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 23:54:39.836019 containerd[1925]: time="2025-09-04T23:54:39.835978266Z" level=info msg="CreateContainer within sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\"" Sep 4 23:54:39.837362 containerd[1925]: time="2025-09-04T23:54:39.836574880Z" level=info msg="StartContainer for \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\"" Sep 4 23:54:39.869966 systemd[1]: Started cri-containerd-167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112.scope - libcontainer container 167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112. Sep 4 23:54:39.906706 containerd[1925]: time="2025-09-04T23:54:39.906675416Z" level=info msg="StartContainer for \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\" returns successfully" Sep 4 23:54:40.231939 kubelet[3173]: I0904 23:54:40.231173 3173 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 23:54:40.311136 systemd[1]: Created slice kubepods-burstable-pod436d9115_d030_43a0_abfe_120fb12689f6.slice - libcontainer container kubepods-burstable-pod436d9115_d030_43a0_abfe_120fb12689f6.slice. 
Sep 4 23:54:40.315544 kubelet[3173]: I0904 23:54:40.315489 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/436d9115-d030-43a0-abfe-120fb12689f6-config-volume\") pod \"coredns-668d6bf9bc-prws6\" (UID: \"436d9115-d030-43a0-abfe-120fb12689f6\") " pod="kube-system/coredns-668d6bf9bc-prws6" Sep 4 23:54:40.315544 kubelet[3173]: I0904 23:54:40.315520 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkjqk\" (UniqueName: \"kubernetes.io/projected/436d9115-d030-43a0-abfe-120fb12689f6-kube-api-access-rkjqk\") pod \"coredns-668d6bf9bc-prws6\" (UID: \"436d9115-d030-43a0-abfe-120fb12689f6\") " pod="kube-system/coredns-668d6bf9bc-prws6" Sep 4 23:54:40.319495 systemd[1]: Created slice kubepods-burstable-pod271f8389_7bd5_44d5_8805_5d150b370739.slice - libcontainer container kubepods-burstable-pod271f8389_7bd5_44d5_8805_5d150b370739.slice. 
Sep 4 23:54:40.416537 kubelet[3173]: I0904 23:54:40.416486 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wznmg\" (UniqueName: \"kubernetes.io/projected/271f8389-7bd5-44d5-8805-5d150b370739-kube-api-access-wznmg\") pod \"coredns-668d6bf9bc-w9h65\" (UID: \"271f8389-7bd5-44d5-8805-5d150b370739\") " pod="kube-system/coredns-668d6bf9bc-w9h65" Sep 4 23:54:40.417186 kubelet[3173]: I0904 23:54:40.417141 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/271f8389-7bd5-44d5-8805-5d150b370739-config-volume\") pod \"coredns-668d6bf9bc-w9h65\" (UID: \"271f8389-7bd5-44d5-8805-5d150b370739\") " pod="kube-system/coredns-668d6bf9bc-w9h65" Sep 4 23:54:40.617337 containerd[1925]: time="2025-09-04T23:54:40.617218794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-prws6,Uid:436d9115-d030-43a0-abfe-120fb12689f6,Namespace:kube-system,Attempt:0,}" Sep 4 23:54:40.624941 containerd[1925]: time="2025-09-04T23:54:40.624655517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w9h65,Uid:271f8389-7bd5-44d5-8805-5d150b370739,Namespace:kube-system,Attempt:0,}" Sep 4 23:54:40.833868 kubelet[3173]: I0904 23:54:40.833810 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bmtr7" podStartSLOduration=6.790229762 podStartE2EDuration="14.833776542s" podCreationTimestamp="2025-09-04 23:54:26 +0000 UTC" firstStartedPulling="2025-09-04 23:54:27.419147251 +0000 UTC m=+5.835487775" lastFinishedPulling="2025-09-04 23:54:35.462694034 +0000 UTC m=+13.879034555" observedRunningTime="2025-09-04 23:54:40.83363023 +0000 UTC m=+19.249970774" watchObservedRunningTime="2025-09-04 23:54:40.833776542 +0000 UTC m=+19.250117077" Sep 4 23:54:43.246317 (udev-worker)[4066]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 23:54:43.249620 systemd-networkd[1828]: cilium_host: Link UP Sep 4 23:54:43.249734 systemd-networkd[1828]: cilium_net: Link UP Sep 4 23:54:43.249908 systemd-networkd[1828]: cilium_net: Gained carrier Sep 4 23:54:43.250046 systemd-networkd[1828]: cilium_host: Gained carrier Sep 4 23:54:43.254185 (udev-worker)[4102]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:54:43.430588 (udev-worker)[4110]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:54:43.436615 systemd-networkd[1828]: cilium_vxlan: Link UP Sep 4 23:54:43.436621 systemd-networkd[1828]: cilium_vxlan: Gained carrier Sep 4 23:54:43.840001 systemd-networkd[1828]: cilium_host: Gained IPv6LL Sep 4 23:54:44.288156 systemd-networkd[1828]: cilium_net: Gained IPv6LL Sep 4 23:54:44.397831 kernel: NET: Registered PF_ALG protocol family Sep 4 23:54:44.864219 systemd-networkd[1828]: cilium_vxlan: Gained IPv6LL Sep 4 23:54:45.034332 systemd-networkd[1828]: lxc_health: Link UP Sep 4 23:54:45.045003 systemd-networkd[1828]: lxc_health: Gained carrier Sep 4 23:54:45.203010 systemd-networkd[1828]: lxc69f4e83a9758: Link UP Sep 4 23:54:45.213815 kernel: eth0: renamed from tmpad434 Sep 4 23:54:45.217184 (udev-worker)[4111]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:54:45.220994 systemd-networkd[1828]: lxc69f4e83a9758: Gained carrier Sep 4 23:54:45.221128 systemd-networkd[1828]: lxc9230ae9d2fd4: Link UP Sep 4 23:54:45.225828 kernel: eth0: renamed from tmpf6f85 Sep 4 23:54:45.231732 systemd-networkd[1828]: lxc9230ae9d2fd4: Gained carrier Sep 4 23:54:46.336046 systemd-networkd[1828]: lxc_health: Gained IPv6LL Sep 4 23:54:46.656064 systemd-networkd[1828]: lxc9230ae9d2fd4: Gained IPv6LL Sep 4 23:54:46.848144 systemd-networkd[1828]: lxc69f4e83a9758: Gained IPv6LL Sep 4 23:54:49.012472 containerd[1925]: time="2025-09-04T23:54:49.012036225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:54:49.012472 containerd[1925]: time="2025-09-04T23:54:49.012119054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:54:49.012472 containerd[1925]: time="2025-09-04T23:54:49.012144100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:49.012472 containerd[1925]: time="2025-09-04T23:54:49.012242014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:49.038876 containerd[1925]: time="2025-09-04T23:54:49.036744890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:54:49.038876 containerd[1925]: time="2025-09-04T23:54:49.037398969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:54:49.038876 containerd[1925]: time="2025-09-04T23:54:49.037459050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:49.038876 containerd[1925]: time="2025-09-04T23:54:49.037585920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:54:49.093220 systemd[1]: Started cri-containerd-ad434e3f158bde0e73560ed469b62767e147bce69b8a97d4c3ab369cf50c2dd1.scope - libcontainer container ad434e3f158bde0e73560ed469b62767e147bce69b8a97d4c3ab369cf50c2dd1. Sep 4 23:54:49.123955 systemd[1]: Started cri-containerd-f6f858a73d32d39e3e7532e169a317e9df7b58491abdf8f7ea4a0daad8c8d0ef.scope - libcontainer container f6f858a73d32d39e3e7532e169a317e9df7b58491abdf8f7ea4a0daad8c8d0ef. 
Sep 4 23:54:49.209057 containerd[1925]: time="2025-09-04T23:54:49.209015314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w9h65,Uid:271f8389-7bd5-44d5-8805-5d150b370739,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad434e3f158bde0e73560ed469b62767e147bce69b8a97d4c3ab369cf50c2dd1\"" Sep 4 23:54:49.216263 containerd[1925]: time="2025-09-04T23:54:49.216222692Z" level=info msg="CreateContainer within sandbox \"ad434e3f158bde0e73560ed469b62767e147bce69b8a97d4c3ab369cf50c2dd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:54:49.243031 containerd[1925]: time="2025-09-04T23:54:49.241700457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-prws6,Uid:436d9115-d030-43a0-abfe-120fb12689f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6f858a73d32d39e3e7532e169a317e9df7b58491abdf8f7ea4a0daad8c8d0ef\"" Sep 4 23:54:49.247342 containerd[1925]: time="2025-09-04T23:54:49.247182160Z" level=info msg="CreateContainer within sandbox \"f6f858a73d32d39e3e7532e169a317e9df7b58491abdf8f7ea4a0daad8c8d0ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:54:49.271844 containerd[1925]: time="2025-09-04T23:54:49.270176485Z" level=info msg="CreateContainer within sandbox \"ad434e3f158bde0e73560ed469b62767e147bce69b8a97d4c3ab369cf50c2dd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22a81df4912f8cf1010ab4b30c47e3c8f4a93c523b63329b00f6d40f32697466\"" Sep 4 23:54:49.272319 containerd[1925]: time="2025-09-04T23:54:49.272107766Z" level=info msg="CreateContainer within sandbox \"f6f858a73d32d39e3e7532e169a317e9df7b58491abdf8f7ea4a0daad8c8d0ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"076f5c1d7977519319040cb1f36e543e8a4426f98860cb3bc7cd80163d6576ed\"" Sep 4 23:54:49.272319 containerd[1925]: time="2025-09-04T23:54:49.272192322Z" level=info msg="StartContainer for 
\"22a81df4912f8cf1010ab4b30c47e3c8f4a93c523b63329b00f6d40f32697466\"" Sep 4 23:54:49.272602 containerd[1925]: time="2025-09-04T23:54:49.272537918Z" level=info msg="StartContainer for \"076f5c1d7977519319040cb1f36e543e8a4426f98860cb3bc7cd80163d6576ed\"" Sep 4 23:54:49.317959 systemd[1]: Started cri-containerd-076f5c1d7977519319040cb1f36e543e8a4426f98860cb3bc7cd80163d6576ed.scope - libcontainer container 076f5c1d7977519319040cb1f36e543e8a4426f98860cb3bc7cd80163d6576ed. Sep 4 23:54:49.320805 systemd[1]: Started cri-containerd-22a81df4912f8cf1010ab4b30c47e3c8f4a93c523b63329b00f6d40f32697466.scope - libcontainer container 22a81df4912f8cf1010ab4b30c47e3c8f4a93c523b63329b00f6d40f32697466. Sep 4 23:54:49.366331 containerd[1925]: time="2025-09-04T23:54:49.366219265Z" level=info msg="StartContainer for \"22a81df4912f8cf1010ab4b30c47e3c8f4a93c523b63329b00f6d40f32697466\" returns successfully" Sep 4 23:54:49.369703 containerd[1925]: time="2025-09-04T23:54:49.369671929Z" level=info msg="StartContainer for \"076f5c1d7977519319040cb1f36e543e8a4426f98860cb3bc7cd80163d6576ed\" returns successfully" Sep 4 23:54:49.855232 kubelet[3173]: I0904 23:54:49.855183 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w9h65" podStartSLOduration=22.855168597 podStartE2EDuration="22.855168597s" podCreationTimestamp="2025-09-04 23:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:54:49.853770325 +0000 UTC m=+28.270110869" watchObservedRunningTime="2025-09-04 23:54:49.855168597 +0000 UTC m=+28.271509139" Sep 4 23:54:50.862425 kubelet[3173]: I0904 23:54:50.862341 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-prws6" podStartSLOduration=23.862322953 podStartE2EDuration="23.862322953s" podCreationTimestamp="2025-09-04 23:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:54:49.869778232 +0000 UTC m=+28.286118772" watchObservedRunningTime="2025-09-04 23:54:50.862322953 +0000 UTC m=+29.278663498" Sep 4 23:54:52.000838 ntpd[1892]: Listen normally on 8 cilium_host 192.168.0.62:123 Sep 4 23:54:52.001545 ntpd[1892]: 4 Sep 23:54:51 ntpd[1892]: Listen normally on 8 cilium_host 192.168.0.62:123 Sep 4 23:54:52.001545 ntpd[1892]: 4 Sep 23:54:51 ntpd[1892]: Listen normally on 9 cilium_net [fe80::9877:d1ff:fe43:ebf8%4]:123 Sep 4 23:54:52.001545 ntpd[1892]: 4 Sep 23:54:51 ntpd[1892]: Listen normally on 10 cilium_host [fe80::50e9:eaff:fea1:1887%5]:123 Sep 4 23:54:52.001545 ntpd[1892]: 4 Sep 23:54:51 ntpd[1892]: Listen normally on 11 cilium_vxlan [fe80::98e5:9aff:feee:b013%6]:123 Sep 4 23:54:52.001545 ntpd[1892]: 4 Sep 23:54:51 ntpd[1892]: Listen normally on 12 lxc_health [fe80::24fb:70ff:fefd:923a%8]:123 Sep 4 23:54:52.001545 ntpd[1892]: 4 Sep 23:54:51 ntpd[1892]: Listen normally on 13 lxc69f4e83a9758 [fe80::e83a:d4ff:fec9:d122%10]:123 Sep 4 23:54:52.001545 ntpd[1892]: 4 Sep 23:54:51 ntpd[1892]: Listen normally on 14 lxc9230ae9d2fd4 [fe80::8c8c:22ff:fe8e:153a%12]:123 Sep 4 23:54:52.000923 ntpd[1892]: Listen normally on 9 cilium_net [fe80::9877:d1ff:fe43:ebf8%4]:123 Sep 4 23:54:52.000970 ntpd[1892]: Listen normally on 10 cilium_host [fe80::50e9:eaff:fea1:1887%5]:123 Sep 4 23:54:52.001001 ntpd[1892]: Listen normally on 11 cilium_vxlan [fe80::98e5:9aff:feee:b013%6]:123 Sep 4 23:54:52.001028 ntpd[1892]: Listen normally on 12 lxc_health [fe80::24fb:70ff:fefd:923a%8]:123 Sep 4 23:54:52.001056 ntpd[1892]: Listen normally on 13 lxc69f4e83a9758 [fe80::e83a:d4ff:fec9:d122%10]:123 Sep 4 23:54:52.001084 ntpd[1892]: Listen normally on 14 lxc9230ae9d2fd4 [fe80::8c8c:22ff:fe8e:153a%12]:123 Sep 4 23:55:07.401038 systemd[1]: Started sshd@7-172.31.21.112:22-139.178.68.195:37764.service - OpenSSH per-connection server daemon (139.178.68.195:37764). 
Sep 4 23:55:07.596890 sshd[4644]: Accepted publickey for core from 139.178.68.195 port 37764 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI Sep 4 23:55:07.599108 sshd-session[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:55:07.604311 systemd-logind[1900]: New session 8 of user core. Sep 4 23:55:07.613932 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 23:55:08.383258 sshd[4646]: Connection closed by 139.178.68.195 port 37764 Sep 4 23:55:08.384769 sshd-session[4644]: pam_unix(sshd:session): session closed for user core Sep 4 23:55:08.387607 systemd[1]: sshd@7-172.31.21.112:22-139.178.68.195:37764.service: Deactivated successfully. Sep 4 23:55:08.389533 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 23:55:08.390822 systemd-logind[1900]: Session 8 logged out. Waiting for processes to exit. Sep 4 23:55:08.391757 systemd-logind[1900]: Removed session 8. Sep 4 23:55:13.423051 systemd[1]: Started sshd@8-172.31.21.112:22-139.178.68.195:52748.service - OpenSSH per-connection server daemon (139.178.68.195:52748). Sep 4 23:55:13.582824 sshd[4658]: Accepted publickey for core from 139.178.68.195 port 52748 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI Sep 4 23:55:13.584291 sshd-session[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:55:13.589162 systemd-logind[1900]: New session 9 of user core. Sep 4 23:55:13.591946 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 23:55:13.793352 sshd[4660]: Connection closed by 139.178.68.195 port 52748 Sep 4 23:55:13.794340 sshd-session[4658]: pam_unix(sshd:session): session closed for user core Sep 4 23:55:13.798457 systemd-logind[1900]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:55:13.799229 systemd[1]: sshd@8-172.31.21.112:22-139.178.68.195:52748.service: Deactivated successfully. 
Sep 4 23:55:13.801283 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 23:55:13.802886 systemd-logind[1900]: Removed session 9.
Sep 4 23:55:18.825474 systemd[1]: Started sshd@9-172.31.21.112:22-139.178.68.195:52764.service - OpenSSH per-connection server daemon (139.178.68.195:52764).
Sep 4 23:55:18.989182 sshd[4674]: Accepted publickey for core from 139.178.68.195 port 52764 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:18.990423 sshd-session[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:18.994964 systemd-logind[1900]: New session 10 of user core.
Sep 4 23:55:19.001937 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 23:55:19.180194 sshd[4676]: Connection closed by 139.178.68.195 port 52764
Sep 4 23:55:19.180976 sshd-session[4674]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:19.184872 systemd[1]: sshd@9-172.31.21.112:22-139.178.68.195:52764.service: Deactivated successfully.
Sep 4 23:55:19.186868 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 23:55:19.187659 systemd-logind[1900]: Session 10 logged out. Waiting for processes to exit.
Sep 4 23:55:19.188875 systemd-logind[1900]: Removed session 10.
Sep 4 23:55:24.221215 systemd[1]: Started sshd@10-172.31.21.112:22-139.178.68.195:35990.service - OpenSSH per-connection server daemon (139.178.68.195:35990).
Sep 4 23:55:24.394572 sshd[4693]: Accepted publickey for core from 139.178.68.195 port 35990 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:24.469290 sshd-session[4693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:24.474886 systemd-logind[1900]: New session 11 of user core.
Sep 4 23:55:24.481948 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 23:55:24.670592 sshd[4695]: Connection closed by 139.178.68.195 port 35990
Sep 4 23:55:24.671883 sshd-session[4693]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:24.675023 systemd[1]: sshd@10-172.31.21.112:22-139.178.68.195:35990.service: Deactivated successfully.
Sep 4 23:55:24.677101 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 23:55:24.677973 systemd-logind[1900]: Session 11 logged out. Waiting for processes to exit.
Sep 4 23:55:24.679157 systemd-logind[1900]: Removed session 11.
Sep 4 23:55:24.703332 systemd[1]: Started sshd@11-172.31.21.112:22-139.178.68.195:36002.service - OpenSSH per-connection server daemon (139.178.68.195:36002).
Sep 4 23:55:24.871278 sshd[4707]: Accepted publickey for core from 139.178.68.195 port 36002 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:24.872412 sshd-session[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:24.876424 systemd-logind[1900]: New session 12 of user core.
Sep 4 23:55:24.881956 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 23:55:25.123917 sshd[4709]: Connection closed by 139.178.68.195 port 36002
Sep 4 23:55:25.125802 sshd-session[4707]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:25.129304 systemd[1]: sshd@11-172.31.21.112:22-139.178.68.195:36002.service: Deactivated successfully.
Sep 4 23:55:25.132523 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 23:55:25.135297 systemd-logind[1900]: Session 12 logged out. Waiting for processes to exit.
Sep 4 23:55:25.136343 systemd-logind[1900]: Removed session 12.
Sep 4 23:55:25.160066 systemd[1]: Started sshd@12-172.31.21.112:22-139.178.68.195:36016.service - OpenSSH per-connection server daemon (139.178.68.195:36016).
Sep 4 23:55:25.353664 sshd[4719]: Accepted publickey for core from 139.178.68.195 port 36016 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:25.354994 sshd-session[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:25.359551 systemd-logind[1900]: New session 13 of user core.
Sep 4 23:55:25.366929 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 23:55:25.552494 sshd[4721]: Connection closed by 139.178.68.195 port 36016
Sep 4 23:55:25.553034 sshd-session[4719]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:25.556303 systemd-logind[1900]: Session 13 logged out. Waiting for processes to exit.
Sep 4 23:55:25.557091 systemd[1]: sshd@12-172.31.21.112:22-139.178.68.195:36016.service: Deactivated successfully.
Sep 4 23:55:25.559014 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 23:55:25.560166 systemd-logind[1900]: Removed session 13.
Sep 4 23:55:30.594033 systemd[1]: Started sshd@13-172.31.21.112:22-139.178.68.195:54546.service - OpenSSH per-connection server daemon (139.178.68.195:54546).
Sep 4 23:55:30.750241 sshd[4734]: Accepted publickey for core from 139.178.68.195 port 54546 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:30.751701 sshd-session[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:30.759679 systemd-logind[1900]: New session 14 of user core.
Sep 4 23:55:30.767960 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 23:55:30.957644 sshd[4736]: Connection closed by 139.178.68.195 port 54546
Sep 4 23:55:30.958189 sshd-session[4734]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:30.968460 systemd-logind[1900]: Session 14 logged out. Waiting for processes to exit.
Sep 4 23:55:30.969214 systemd[1]: sshd@13-172.31.21.112:22-139.178.68.195:54546.service: Deactivated successfully.
Sep 4 23:55:30.970756 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 23:55:30.971920 systemd-logind[1900]: Removed session 14.
Sep 4 23:55:35.994201 systemd[1]: Started sshd@14-172.31.21.112:22-139.178.68.195:54558.service - OpenSSH per-connection server daemon (139.178.68.195:54558).
Sep 4 23:55:36.170061 sshd[4748]: Accepted publickey for core from 139.178.68.195 port 54558 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:36.171500 sshd-session[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:36.176417 systemd-logind[1900]: New session 15 of user core.
Sep 4 23:55:36.187953 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 23:55:36.370463 sshd[4750]: Connection closed by 139.178.68.195 port 54558
Sep 4 23:55:36.371261 sshd-session[4748]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:36.373963 systemd[1]: sshd@14-172.31.21.112:22-139.178.68.195:54558.service: Deactivated successfully.
Sep 4 23:55:36.376159 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 23:55:36.377422 systemd-logind[1900]: Session 15 logged out. Waiting for processes to exit.
Sep 4 23:55:36.378954 systemd-logind[1900]: Removed session 15.
Sep 4 23:55:36.408053 systemd[1]: Started sshd@15-172.31.21.112:22-139.178.68.195:54564.service - OpenSSH per-connection server daemon (139.178.68.195:54564).
Sep 4 23:55:36.566755 sshd[4762]: Accepted publickey for core from 139.178.68.195 port 54564 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:36.568050 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:36.572537 systemd-logind[1900]: New session 16 of user core.
Sep 4 23:55:36.578917 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 23:55:37.225932 sshd[4764]: Connection closed by 139.178.68.195 port 54564
Sep 4 23:55:37.226966 sshd-session[4762]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:37.230270 systemd-logind[1900]: Session 16 logged out. Waiting for processes to exit.
Sep 4 23:55:37.231040 systemd[1]: sshd@15-172.31.21.112:22-139.178.68.195:54564.service: Deactivated successfully.
Sep 4 23:55:37.232920 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 23:55:37.233703 systemd-logind[1900]: Removed session 16.
Sep 4 23:55:37.255688 systemd[1]: Started sshd@16-172.31.21.112:22-139.178.68.195:54576.service - OpenSSH per-connection server daemon (139.178.68.195:54576).
Sep 4 23:55:37.440669 sshd[4774]: Accepted publickey for core from 139.178.68.195 port 54576 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:37.441163 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:37.445902 systemd-logind[1900]: New session 17 of user core.
Sep 4 23:55:37.453937 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 23:55:38.320611 sshd[4776]: Connection closed by 139.178.68.195 port 54576
Sep 4 23:55:38.321639 sshd-session[4774]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:38.331017 systemd-logind[1900]: Session 17 logged out. Waiting for processes to exit.
Sep 4 23:55:38.331561 systemd[1]: sshd@16-172.31.21.112:22-139.178.68.195:54576.service: Deactivated successfully.
Sep 4 23:55:38.333708 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 23:55:38.334611 systemd-logind[1900]: Removed session 17.
Sep 4 23:55:38.350206 systemd[1]: Started sshd@17-172.31.21.112:22-139.178.68.195:54588.service - OpenSSH per-connection server daemon (139.178.68.195:54588).
Sep 4 23:55:38.508986 sshd[4793]: Accepted publickey for core from 139.178.68.195 port 54588 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:38.510307 sshd-session[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:38.514906 systemd-logind[1900]: New session 18 of user core.
Sep 4 23:55:38.524990 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 23:55:38.827497 sshd[4795]: Connection closed by 139.178.68.195 port 54588
Sep 4 23:55:38.828048 sshd-session[4793]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:38.831564 systemd[1]: sshd@17-172.31.21.112:22-139.178.68.195:54588.service: Deactivated successfully.
Sep 4 23:55:38.831885 systemd-logind[1900]: Session 18 logged out. Waiting for processes to exit.
Sep 4 23:55:38.833422 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 23:55:38.834621 systemd-logind[1900]: Removed session 18.
Sep 4 23:55:38.865086 systemd[1]: Started sshd@18-172.31.21.112:22-139.178.68.195:54602.service - OpenSSH per-connection server daemon (139.178.68.195:54602).
Sep 4 23:55:39.027859 sshd[4805]: Accepted publickey for core from 139.178.68.195 port 54602 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:39.029222 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:39.033355 systemd-logind[1900]: New session 19 of user core.
Sep 4 23:55:39.040959 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 23:55:39.233369 sshd[4807]: Connection closed by 139.178.68.195 port 54602
Sep 4 23:55:39.233945 sshd-session[4805]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:39.236693 systemd[1]: sshd@18-172.31.21.112:22-139.178.68.195:54602.service: Deactivated successfully.
Sep 4 23:55:39.239375 systemd-logind[1900]: Session 19 logged out. Waiting for processes to exit.
Sep 4 23:55:39.240169 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 23:55:39.241108 systemd-logind[1900]: Removed session 19.
Sep 4 23:55:44.269024 systemd[1]: Started sshd@19-172.31.21.112:22-139.178.68.195:39652.service - OpenSSH per-connection server daemon (139.178.68.195:39652).
Sep 4 23:55:44.429686 sshd[4818]: Accepted publickey for core from 139.178.68.195 port 39652 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:44.430963 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:44.435085 systemd-logind[1900]: New session 20 of user core.
Sep 4 23:55:44.441911 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 23:55:44.616145 sshd[4820]: Connection closed by 139.178.68.195 port 39652
Sep 4 23:55:44.616931 sshd-session[4818]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:44.620318 systemd[1]: sshd@19-172.31.21.112:22-139.178.68.195:39652.service: Deactivated successfully.
Sep 4 23:55:44.622273 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 23:55:44.623009 systemd-logind[1900]: Session 20 logged out. Waiting for processes to exit.
Sep 4 23:55:44.624136 systemd-logind[1900]: Removed session 20.
Sep 4 23:55:49.653067 systemd[1]: Started sshd@20-172.31.21.112:22-139.178.68.195:39654.service - OpenSSH per-connection server daemon (139.178.68.195:39654).
Sep 4 23:55:49.816501 sshd[4834]: Accepted publickey for core from 139.178.68.195 port 39654 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:49.817744 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:49.821871 systemd-logind[1900]: New session 21 of user core.
Sep 4 23:55:49.826941 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 23:55:50.010430 sshd[4836]: Connection closed by 139.178.68.195 port 39654
Sep 4 23:55:50.011139 sshd-session[4834]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:50.014069 systemd[1]: sshd@20-172.31.21.112:22-139.178.68.195:39654.service: Deactivated successfully.
Sep 4 23:55:50.016622 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 23:55:50.017623 systemd-logind[1900]: Session 21 logged out. Waiting for processes to exit.
Sep 4 23:55:50.018614 systemd-logind[1900]: Removed session 21.
Sep 4 23:55:55.045146 systemd[1]: Started sshd@21-172.31.21.112:22-139.178.68.195:47822.service - OpenSSH per-connection server daemon (139.178.68.195:47822).
Sep 4 23:55:55.230168 sshd[4849]: Accepted publickey for core from 139.178.68.195 port 47822 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:55:55.231464 sshd-session[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:55:55.237171 systemd-logind[1900]: New session 22 of user core.
Sep 4 23:55:55.242926 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 23:55:55.440110 sshd[4851]: Connection closed by 139.178.68.195 port 47822
Sep 4 23:55:55.440961 sshd-session[4849]: pam_unix(sshd:session): session closed for user core
Sep 4 23:55:55.444980 systemd[1]: sshd@21-172.31.21.112:22-139.178.68.195:47822.service: Deactivated successfully.
Sep 4 23:55:55.447542 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 23:55:55.448460 systemd-logind[1900]: Session 22 logged out. Waiting for processes to exit.
Sep 4 23:55:55.450040 systemd-logind[1900]: Removed session 22.
Sep 4 23:56:00.476095 systemd[1]: Started sshd@22-172.31.21.112:22-139.178.68.195:38774.service - OpenSSH per-connection server daemon (139.178.68.195:38774).
Sep 4 23:56:00.652814 sshd[4865]: Accepted publickey for core from 139.178.68.195 port 38774 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:56:00.695739 sshd-session[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:56:00.701060 systemd-logind[1900]: New session 23 of user core.
Sep 4 23:56:00.718020 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 23:56:00.920644 sshd[4867]: Connection closed by 139.178.68.195 port 38774
Sep 4 23:56:00.921277 sshd-session[4865]: pam_unix(sshd:session): session closed for user core
Sep 4 23:56:00.924689 systemd[1]: sshd@22-172.31.21.112:22-139.178.68.195:38774.service: Deactivated successfully.
Sep 4 23:56:00.926451 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 23:56:00.927207 systemd-logind[1900]: Session 23 logged out. Waiting for processes to exit.
Sep 4 23:56:00.928311 systemd-logind[1900]: Removed session 23.
Sep 4 23:56:00.960132 systemd[1]: Started sshd@23-172.31.21.112:22-139.178.68.195:38784.service - OpenSSH per-connection server daemon (139.178.68.195:38784).
Sep 4 23:56:01.126733 sshd[4879]: Accepted publickey for core from 139.178.68.195 port 38784 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI
Sep 4 23:56:01.128068 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:56:01.133055 systemd-logind[1900]: New session 24 of user core.
Sep 4 23:56:01.134967 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 23:56:04.667230 containerd[1925]: time="2025-09-04T23:56:04.666657028Z" level=info msg="StopContainer for \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\" with timeout 30 (s)"
Sep 4 23:56:04.671120 containerd[1925]: time="2025-09-04T23:56:04.671086308Z" level=info msg="Stop container \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\" with signal terminated"
Sep 4 23:56:04.682864 systemd[1]: cri-containerd-2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d.scope: Deactivated successfully.
Sep 4 23:56:04.710406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d-rootfs.mount: Deactivated successfully.
Sep 4 23:56:04.725485 containerd[1925]: time="2025-09-04T23:56:04.725424997Z" level=info msg="shim disconnected" id=2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d namespace=k8s.io
Sep 4 23:56:04.725485 containerd[1925]: time="2025-09-04T23:56:04.725477447Z" level=warning msg="cleaning up after shim disconnected" id=2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d namespace=k8s.io
Sep 4 23:56:04.725485 containerd[1925]: time="2025-09-04T23:56:04.725486291Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:56:04.742748 containerd[1925]: time="2025-09-04T23:56:04.742709346Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 23:56:04.743119 containerd[1925]: time="2025-09-04T23:56:04.742989184Z" level=info msg="StopContainer for \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\" returns successfully"
Sep 4 23:56:04.769574 containerd[1925]: time="2025-09-04T23:56:04.769516814Z" level=info msg="StopPodSandbox for \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\""
Sep 4 23:56:04.769711 containerd[1925]: time="2025-09-04T23:56:04.769520356Z" level=info msg="StopContainer for \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\" with timeout 2 (s)"
Sep 4 23:56:04.770120 containerd[1925]: time="2025-09-04T23:56:04.770052150Z" level=info msg="Stop container \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\" with signal terminated"
Sep 4 23:56:04.777190 systemd-networkd[1828]: lxc_health: Link DOWN
Sep 4 23:56:04.777200 systemd-networkd[1828]: lxc_health: Lost carrier
Sep 4 23:56:04.784217 containerd[1925]: time="2025-09-04T23:56:04.770934881Z" level=info msg="Container to stop \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:56:04.783361 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd-shm.mount: Deactivated successfully.
Sep 4 23:56:04.798665 systemd[1]: cri-containerd-167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112.scope: Deactivated successfully.
Sep 4 23:56:04.798954 systemd[1]: cri-containerd-167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112.scope: Consumed 6.675s CPU time, 202.7M memory peak, 79M read from disk, 13.3M written to disk.
Sep 4 23:56:04.808691 systemd[1]: cri-containerd-eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd.scope: Deactivated successfully.
Sep 4 23:56:04.832565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112-rootfs.mount: Deactivated successfully.
Sep 4 23:56:04.853691 containerd[1925]: time="2025-09-04T23:56:04.853446944Z" level=info msg="shim disconnected" id=eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd namespace=k8s.io
Sep 4 23:56:04.853691 containerd[1925]: time="2025-09-04T23:56:04.853511178Z" level=warning msg="cleaning up after shim disconnected" id=eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd namespace=k8s.io
Sep 4 23:56:04.853691 containerd[1925]: time="2025-09-04T23:56:04.853524171Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:56:04.854675 containerd[1925]: time="2025-09-04T23:56:04.854546877Z" level=info msg="shim disconnected" id=167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112 namespace=k8s.io
Sep 4 23:56:04.854675 containerd[1925]: time="2025-09-04T23:56:04.854622462Z" level=warning msg="cleaning up after shim disconnected" id=167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112 namespace=k8s.io
Sep 4 23:56:04.854675 containerd[1925]: time="2025-09-04T23:56:04.854634696Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:56:04.880902 containerd[1925]: time="2025-09-04T23:56:04.880847075Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:56:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 23:56:04.882016 containerd[1925]: time="2025-09-04T23:56:04.881984131Z" level=info msg="TearDown network for sandbox \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\" successfully"
Sep 4 23:56:04.882016 containerd[1925]: time="2025-09-04T23:56:04.882010221Z" level=info msg="StopPodSandbox for \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\" returns successfully"
Sep 4 23:56:04.888289 containerd[1925]: time="2025-09-04T23:56:04.888241292Z" level=info msg="StopContainer for \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\" returns successfully"
Sep 4 23:56:04.889216 containerd[1925]: time="2025-09-04T23:56:04.889165305Z" level=info msg="StopPodSandbox for \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\""
Sep 4 23:56:04.889376 containerd[1925]: time="2025-09-04T23:56:04.889206706Z" level=info msg="Container to stop \"56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:56:04.889376 containerd[1925]: time="2025-09-04T23:56:04.889247633Z" level=info msg="Container to stop \"b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:56:04.889376 containerd[1925]: time="2025-09-04T23:56:04.889259965Z" level=info msg="Container to stop \"5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:56:04.889376 containerd[1925]: time="2025-09-04T23:56:04.889272634Z" level=info msg="Container to stop \"5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:56:04.889376 containerd[1925]: time="2025-09-04T23:56:04.889285684Z" level=info msg="Container to stop \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:56:04.902577 systemd[1]: cri-containerd-d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18.scope: Deactivated successfully.
Sep 4 23:56:04.936882 containerd[1925]: time="2025-09-04T23:56:04.935004672Z" level=info msg="shim disconnected" id=d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18 namespace=k8s.io
Sep 4 23:56:04.936882 containerd[1925]: time="2025-09-04T23:56:04.935066814Z" level=warning msg="cleaning up after shim disconnected" id=d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18 namespace=k8s.io
Sep 4 23:56:04.936882 containerd[1925]: time="2025-09-04T23:56:04.935078566Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:56:04.953279 containerd[1925]: time="2025-09-04T23:56:04.953219436Z" level=info msg="TearDown network for sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" successfully"
Sep 4 23:56:04.953279 containerd[1925]: time="2025-09-04T23:56:04.953256624Z" level=info msg="StopPodSandbox for \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" returns successfully"
Sep 4 23:56:04.994670 kubelet[3173]: I0904 23:56:04.994335 3173 scope.go:117] "RemoveContainer" containerID="2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d"
Sep 4 23:56:05.028590 containerd[1925]: time="2025-09-04T23:56:05.028528676Z" level=info msg="RemoveContainer for \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\""
Sep 4 23:56:05.032726 kubelet[3173]: I0904 23:56:05.032413 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k728d\" (UniqueName: \"kubernetes.io/projected/13b98d83-4540-46af-a1b3-f74609a0712a-kube-api-access-k728d\") pod \"13b98d83-4540-46af-a1b3-f74609a0712a\" (UID: \"13b98d83-4540-46af-a1b3-f74609a0712a\") "
Sep 4 23:56:05.032726 kubelet[3173]: I0904 23:56:05.032484 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cilium-run\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") "
Sep 4 23:56:05.032726 kubelet[3173]: I0904 23:56:05.032510 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-hostproc\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") "
Sep 4 23:56:05.032726 kubelet[3173]: I0904 23:56:05.032533 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13b98d83-4540-46af-a1b3-f74609a0712a-cilium-config-path\") pod \"13b98d83-4540-46af-a1b3-f74609a0712a\" (UID: \"13b98d83-4540-46af-a1b3-f74609a0712a\") "
Sep 4 23:56:05.032726 kubelet[3173]: I0904 23:56:05.032550 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cilium-config-path\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") "
Sep 4 23:56:05.032726 kubelet[3173]: I0904 23:56:05.032568 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6j8r\" (UniqueName: \"kubernetes.io/projected/b51dd85d-abb8-4199-9fee-c8fd3481d84a-kube-api-access-r6j8r\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") "
Sep 4 23:56:05.032993 kubelet[3173]: I0904 23:56:05.032581 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cilium-cgroup\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") "
Sep 4 23:56:05.032993 kubelet[3173]: I0904 23:56:05.032600 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b51dd85d-abb8-4199-9fee-c8fd3481d84a-hubble-tls\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") "
Sep 4 23:56:05.032993 kubelet[3173]: I0904 23:56:05.032613 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-lib-modules\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") "
Sep 4 23:56:05.032993 kubelet[3173]: I0904 23:56:05.032629 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cni-path\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") "
Sep 4 23:56:05.035278 containerd[1925]: time="2025-09-04T23:56:05.035246336Z" level=info msg="RemoveContainer for \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\" returns successfully"
Sep 4 23:56:05.042808 kubelet[3173]: I0904 23:56:05.041846 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:56:05.042808 kubelet[3173]: I0904 23:56:05.039734 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b98d83-4540-46af-a1b3-f74609a0712a-kube-api-access-k728d" (OuterVolumeSpecName: "kube-api-access-k728d") pod "13b98d83-4540-46af-a1b3-f74609a0712a" (UID: "13b98d83-4540-46af-a1b3-f74609a0712a"). InnerVolumeSpecName "kube-api-access-k728d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:56:05.042808 kubelet[3173]: I0904 23:56:05.041928 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-hostproc" (OuterVolumeSpecName: "hostproc") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:56:05.042808 kubelet[3173]: I0904 23:56:05.041955 3173 scope.go:117] "RemoveContainer" containerID="2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d"
Sep 4 23:56:05.042808 kubelet[3173]: I0904 23:56:05.039733 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cni-path" (OuterVolumeSpecName: "cni-path") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:56:05.043027 kubelet[3173]: I0904 23:56:05.042068 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:56:05.043407 containerd[1925]: time="2025-09-04T23:56:05.043212054Z" level=error msg="ContainerStatus for \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\": not found"
Sep 4 23:56:05.044176 kubelet[3173]: I0904 23:56:05.044149 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13b98d83-4540-46af-a1b3-f74609a0712a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "13b98d83-4540-46af-a1b3-f74609a0712a" (UID: "13b98d83-4540-46af-a1b3-f74609a0712a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 23:56:05.046941 kubelet[3173]: I0904 23:56:05.046632 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51dd85d-abb8-4199-9fee-c8fd3481d84a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:56:05.046941 kubelet[3173]: I0904 23:56:05.046671 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:56:05.046941 kubelet[3173]: E0904 23:56:05.046776 3173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\": not found" containerID="2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d"
Sep 4 23:56:05.048576 kubelet[3173]: I0904 23:56:05.048322 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 23:56:05.051673 kubelet[3173]: I0904 23:56:05.051634 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51dd85d-abb8-4199-9fee-c8fd3481d84a-kube-api-access-r6j8r" (OuterVolumeSpecName: "kube-api-access-r6j8r") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "kube-api-access-r6j8r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:56:05.065165 kubelet[3173]: I0904 23:56:05.053023 3173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d"} err="failed to get container status \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2232d6f43eb3eb5e19d8b8348f5a19588588f02edf4b10dec4107f7267608f9d\": not found"
Sep 4 23:56:05.065165 kubelet[3173]: I0904 23:56:05.065168 3173 scope.go:117] "RemoveContainer" containerID="167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112"
Sep 4 23:56:05.066505 containerd[1925]: time="2025-09-04T23:56:05.066453937Z" level=info msg="RemoveContainer for \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\""
Sep 4 23:56:05.071503 containerd[1925]: time="2025-09-04T23:56:05.071470811Z" level=info msg="RemoveContainer for \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\" returns successfully"
Sep 4 23:56:05.071753 kubelet[3173]: I0904 23:56:05.071732 3173 scope.go:117] "RemoveContainer" containerID="5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf"
Sep 4 23:56:05.073068 containerd[1925]: time="2025-09-04T23:56:05.072805053Z" level=info msg="RemoveContainer for \"5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf\""
Sep 4 23:56:05.077850 containerd[1925]: time="2025-09-04T23:56:05.077818275Z" level=info msg="RemoveContainer for \"5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf\" returns successfully"
Sep 4 23:56:05.078038 kubelet[3173]: I0904 23:56:05.078011 3173 scope.go:117] "RemoveContainer" containerID="b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9"
Sep 4 23:56:05.078826 containerd[1925]: time="2025-09-04T23:56:05.078802502Z" level=info msg="RemoveContainer for \"b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9\""
Sep 4 23:56:05.084363 containerd[1925]: time="2025-09-04T23:56:05.084326244Z" level=info msg="RemoveContainer for \"b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9\" returns successfully"
Sep 4 23:56:05.084538 kubelet[3173]: I0904 23:56:05.084514 3173 scope.go:117] "RemoveContainer" containerID="56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f"
Sep 4 23:56:05.085609 containerd[1925]: time="2025-09-04T23:56:05.085479900Z" level=info msg="RemoveContainer for \"56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f\""
Sep 4 23:56:05.091465 containerd[1925]: time="2025-09-04T23:56:05.091423031Z" level=info msg="RemoveContainer for \"56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f\" returns successfully"
Sep 4 23:56:05.091649 kubelet[3173]: I0904 23:56:05.091626 3173 scope.go:117] "RemoveContainer" containerID="5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd"
Sep 4 23:56:05.092905 containerd[1925]: time="2025-09-04T23:56:05.092643594Z" level=info msg="RemoveContainer for \"5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd\""
Sep 4 23:56:05.097933 containerd[1925]: time="2025-09-04T23:56:05.097889963Z" level=info msg="RemoveContainer for \"5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd\" returns successfully"
Sep 4 23:56:05.098171 kubelet[3173]: I0904 23:56:05.098139 3173 scope.go:117] "RemoveContainer" containerID="167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112"
Sep 4 23:56:05.098445 containerd[1925]: time="2025-09-04T23:56:05.098409797Z" level=error msg="ContainerStatus for \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\": not found"
Sep 4 23:56:05.098718 kubelet[3173]: E0904 23:56:05.098687 3173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\": not found" containerID="167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112"
Sep 4 23:56:05.098802 kubelet[3173]: I0904 23:56:05.098720 3173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112"} err="failed to get container status \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\": rpc error: code = NotFound desc = an error occurred when try to find container \"167a7f12d1c2812e7a91800974a5f37d1a5fcae03d05f505c7f162e91699a112\": not found"
Sep 4 23:56:05.098802 kubelet[3173]: I0904 23:56:05.098749 3173 scope.go:117] "RemoveContainer" containerID="5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf"
Sep 4 23:56:05.098989 containerd[1925]: time="2025-09-04T23:56:05.098946411Z" level=error msg="ContainerStatus for \"5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf\": not found"
Sep 4 23:56:05.099113 kubelet[3173]: E0904 23:56:05.099087 3173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf\": not found" containerID="5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf"
Sep 4 23:56:05.099182 kubelet[3173]: I0904 23:56:05.099115 3173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf"} err="failed to get container status
\"5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ca2eb2445c02fd5a2922d0506d673936d4e58c373a03728f7c619f6b5eb7bdf\": not found" Sep 4 23:56:05.099182 kubelet[3173]: I0904 23:56:05.099135 3173 scope.go:117] "RemoveContainer" containerID="b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9" Sep 4 23:56:05.099332 containerd[1925]: time="2025-09-04T23:56:05.099301872Z" level=error msg="ContainerStatus for \"b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9\": not found" Sep 4 23:56:05.099436 kubelet[3173]: E0904 23:56:05.099410 3173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9\": not found" containerID="b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9" Sep 4 23:56:05.099501 kubelet[3173]: I0904 23:56:05.099439 3173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9"} err="failed to get container status \"b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"b58df2c26559452d9cdbb64d1384563b05c548f78f2c220024ef61355dcc19f9\": not found" Sep 4 23:56:05.099501 kubelet[3173]: I0904 23:56:05.099459 3173 scope.go:117] "RemoveContainer" containerID="56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f" Sep 4 23:56:05.099726 containerd[1925]: time="2025-09-04T23:56:05.099677612Z" level=error msg="ContainerStatus for \"56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f\": not found" Sep 4 23:56:05.099952 kubelet[3173]: E0904 23:56:05.099824 3173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f\": not found" containerID="56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f" Sep 4 23:56:05.100034 kubelet[3173]: I0904 23:56:05.099946 3173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f"} err="failed to get container status \"56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"56dca271721f464e5297b8cc02d73d6e8da9f8138ef819c1e6945ae5734f8c1f\": not found" Sep 4 23:56:05.100034 kubelet[3173]: I0904 23:56:05.099968 3173 scope.go:117] "RemoveContainer" containerID="5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd" Sep 4 23:56:05.100261 containerd[1925]: time="2025-09-04T23:56:05.100186975Z" level=error msg="ContainerStatus for \"5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd\": not found" Sep 4 23:56:05.100313 kubelet[3173]: E0904 23:56:05.100297 3173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd\": not found" containerID="5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd" Sep 4 23:56:05.100357 kubelet[3173]: I0904 23:56:05.100320 3173 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd"} err="failed to get container status \"5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b84ce487fc3b0311374d2e6c9300a13aa88df8f96c85c56da765e949970fdfd\": not found" Sep 4 23:56:05.132885 kubelet[3173]: I0904 23:56:05.132822 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-host-proc-sys-kernel\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " Sep 4 23:56:05.132885 kubelet[3173]: I0904 23:56:05.132866 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-bpf-maps\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " Sep 4 23:56:05.132885 kubelet[3173]: I0904 23:56:05.132901 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b51dd85d-abb8-4199-9fee-c8fd3481d84a-clustermesh-secrets\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " Sep 4 23:56:05.133109 kubelet[3173]: I0904 23:56:05.132921 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-etc-cni-netd\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " Sep 4 23:56:05.133109 kubelet[3173]: I0904 23:56:05.132936 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-host-proc-sys-net\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " Sep 4 23:56:05.133109 kubelet[3173]: I0904 23:56:05.132951 3173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-xtables-lock\") pod \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\" (UID: \"b51dd85d-abb8-4199-9fee-c8fd3481d84a\") " Sep 4 23:56:05.133109 kubelet[3173]: I0904 23:56:05.133001 3173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k728d\" (UniqueName: \"kubernetes.io/projected/13b98d83-4540-46af-a1b3-f74609a0712a-kube-api-access-k728d\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.133109 kubelet[3173]: I0904 23:56:05.133014 3173 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cilium-run\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.133109 kubelet[3173]: I0904 23:56:05.133024 3173 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-hostproc\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.133109 kubelet[3173]: I0904 23:56:05.133033 3173 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13b98d83-4540-46af-a1b3-f74609a0712a-cilium-config-path\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.133303 kubelet[3173]: I0904 23:56:05.133042 3173 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cilium-config-path\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.133303 kubelet[3173]: I0904 
23:56:05.133052 3173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r6j8r\" (UniqueName: \"kubernetes.io/projected/b51dd85d-abb8-4199-9fee-c8fd3481d84a-kube-api-access-r6j8r\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.133303 kubelet[3173]: I0904 23:56:05.133060 3173 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cilium-cgroup\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.133303 kubelet[3173]: I0904 23:56:05.133068 3173 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b51dd85d-abb8-4199-9fee-c8fd3481d84a-hubble-tls\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.133303 kubelet[3173]: I0904 23:56:05.133087 3173 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-lib-modules\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.133303 kubelet[3173]: I0904 23:56:05.133094 3173 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-cni-path\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.133303 kubelet[3173]: I0904 23:56:05.133129 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:56:05.133474 kubelet[3173]: I0904 23:56:05.133152 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:56:05.133474 kubelet[3173]: I0904 23:56:05.133167 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:56:05.133680 kubelet[3173]: I0904 23:56:05.133625 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:56:05.133680 kubelet[3173]: I0904 23:56:05.133664 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:56:05.136988 kubelet[3173]: I0904 23:56:05.136955 3173 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51dd85d-abb8-4199-9fee-c8fd3481d84a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b51dd85d-abb8-4199-9fee-c8fd3481d84a" (UID: "b51dd85d-abb8-4199-9fee-c8fd3481d84a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 23:56:05.233835 kubelet[3173]: I0904 23:56:05.233687 3173 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-host-proc-sys-kernel\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.233835 kubelet[3173]: I0904 23:56:05.233727 3173 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-bpf-maps\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.233835 kubelet[3173]: I0904 23:56:05.233737 3173 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b51dd85d-abb8-4199-9fee-c8fd3481d84a-clustermesh-secrets\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.233835 kubelet[3173]: I0904 23:56:05.233745 3173 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-etc-cni-netd\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.233835 kubelet[3173]: I0904 23:56:05.233754 3173 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-host-proc-sys-net\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.233835 kubelet[3173]: I0904 23:56:05.233762 3173 reconciler_common.go:299] "Volume 
detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b51dd85d-abb8-4199-9fee-c8fd3481d84a-xtables-lock\") on node \"ip-172-31-21-112\" DevicePath \"\"" Sep 4 23:56:05.282336 systemd[1]: Removed slice kubepods-besteffort-pod13b98d83_4540_46af_a1b3_f74609a0712a.slice - libcontainer container kubepods-besteffort-pod13b98d83_4540_46af_a1b3_f74609a0712a.slice. Sep 4 23:56:05.295212 systemd[1]: Removed slice kubepods-burstable-podb51dd85d_abb8_4199_9fee_c8fd3481d84a.slice - libcontainer container kubepods-burstable-podb51dd85d_abb8_4199_9fee_c8fd3481d84a.slice. Sep 4 23:56:05.295353 systemd[1]: kubepods-burstable-podb51dd85d_abb8_4199_9fee_c8fd3481d84a.slice: Consumed 6.755s CPU time, 203M memory peak, 80.1M read from disk, 13.3M written to disk. Sep 4 23:56:05.638149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd-rootfs.mount: Deactivated successfully. Sep 4 23:56:05.638257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18-rootfs.mount: Deactivated successfully. Sep 4 23:56:05.638319 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18-shm.mount: Deactivated successfully. Sep 4 23:56:05.638379 systemd[1]: var-lib-kubelet-pods-13b98d83\x2d4540\x2d46af\x2da1b3\x2df74609a0712a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk728d.mount: Deactivated successfully. Sep 4 23:56:05.638440 systemd[1]: var-lib-kubelet-pods-b51dd85d\x2dabb8\x2d4199\x2d9fee\x2dc8fd3481d84a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr6j8r.mount: Deactivated successfully. Sep 4 23:56:05.638501 systemd[1]: var-lib-kubelet-pods-b51dd85d\x2dabb8\x2d4199\x2d9fee\x2dc8fd3481d84a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 4 23:56:05.638556 systemd[1]: var-lib-kubelet-pods-b51dd85d\x2dabb8\x2d4199\x2d9fee\x2dc8fd3481d84a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 23:56:05.699400 kubelet[3173]: I0904 23:56:05.699361 3173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13b98d83-4540-46af-a1b3-f74609a0712a" path="/var/lib/kubelet/pods/13b98d83-4540-46af-a1b3-f74609a0712a/volumes" Sep 4 23:56:05.699775 kubelet[3173]: I0904 23:56:05.699756 3173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b51dd85d-abb8-4199-9fee-c8fd3481d84a" path="/var/lib/kubelet/pods/b51dd85d-abb8-4199-9fee-c8fd3481d84a/volumes" Sep 4 23:56:06.489612 sshd[4881]: Connection closed by 139.178.68.195 port 38784 Sep 4 23:56:06.490847 sshd-session[4879]: pam_unix(sshd:session): session closed for user core Sep 4 23:56:06.494355 systemd[1]: sshd@23-172.31.21.112:22-139.178.68.195:38784.service: Deactivated successfully. Sep 4 23:56:06.496104 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 23:56:06.496283 systemd[1]: session-24.scope: Consumed 1.008s CPU time, 26.6M memory peak. Sep 4 23:56:06.496843 systemd-logind[1900]: Session 24 logged out. Waiting for processes to exit. Sep 4 23:56:06.497678 systemd-logind[1900]: Removed session 24. Sep 4 23:56:06.526405 systemd[1]: Started sshd@24-172.31.21.112:22-139.178.68.195:38788.service - OpenSSH per-connection server daemon (139.178.68.195:38788). Sep 4 23:56:06.702357 sshd[5040]: Accepted publickey for core from 139.178.68.195 port 38788 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI Sep 4 23:56:06.703890 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:56:06.710020 systemd-logind[1900]: New session 25 of user core. Sep 4 23:56:06.720029 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 4 23:56:06.782606 kubelet[3173]: E0904 23:56:06.782478 3173 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:56:07.000848 ntpd[1892]: Deleting interface #12 lxc_health, fe80::24fb:70ff:fefd:923a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=75 secs Sep 4 23:56:07.001210 ntpd[1892]: 4 Sep 23:56:06 ntpd[1892]: Deleting interface #12 lxc_health, fe80::24fb:70ff:fefd:923a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=75 secs Sep 4 23:56:07.962227 sshd[5042]: Connection closed by 139.178.68.195 port 38788 Sep 4 23:56:07.964543 sshd-session[5040]: pam_unix(sshd:session): session closed for user core Sep 4 23:56:07.968551 systemd[1]: sshd@24-172.31.21.112:22-139.178.68.195:38788.service: Deactivated successfully. Sep 4 23:56:07.968902 systemd-logind[1900]: Session 25 logged out. Waiting for processes to exit. Sep 4 23:56:07.973413 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 23:56:07.977001 systemd-logind[1900]: Removed session 25. Sep 4 23:56:08.003896 systemd[1]: Started sshd@25-172.31.21.112:22-139.178.68.195:38798.service - OpenSSH per-connection server daemon (139.178.68.195:38798). 
Sep 4 23:56:08.042327 kubelet[3173]: I0904 23:56:08.042292 3173 memory_manager.go:355] "RemoveStaleState removing state" podUID="13b98d83-4540-46af-a1b3-f74609a0712a" containerName="cilium-operator" Sep 4 23:56:08.042327 kubelet[3173]: I0904 23:56:08.042321 3173 memory_manager.go:355] "RemoveStaleState removing state" podUID="b51dd85d-abb8-4199-9fee-c8fd3481d84a" containerName="cilium-agent" Sep 4 23:56:08.063806 kubelet[3173]: I0904 23:56:08.062040 3173 status_manager.go:890] "Failed to get status for pod" podUID="e1341c0c-f37b-464a-9923-1cd46d821d53" pod="kube-system/cilium-rnz5z" err="pods \"cilium-rnz5z\" is forbidden: User \"system:node:ip-172-31-21-112\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-112' and this object" Sep 4 23:56:08.065186 systemd[1]: Created slice kubepods-burstable-pode1341c0c_f37b_464a_9923_1cd46d821d53.slice - libcontainer container kubepods-burstable-pode1341c0c_f37b_464a_9923_1cd46d821d53.slice. 
Sep 4 23:56:08.156832 kubelet[3173]: I0904 23:56:08.156773 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e1341c0c-f37b-464a-9923-1cd46d821d53-cilium-ipsec-secrets\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157104 kubelet[3173]: I0904 23:56:08.156986 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1341c0c-f37b-464a-9923-1cd46d821d53-xtables-lock\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157104 kubelet[3173]: I0904 23:56:08.157008 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1341c0c-f37b-464a-9923-1cd46d821d53-cilium-config-path\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157104 kubelet[3173]: I0904 23:56:08.157051 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1341c0c-f37b-464a-9923-1cd46d821d53-hubble-tls\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157104 kubelet[3173]: I0904 23:56:08.157082 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75g46\" (UniqueName: \"kubernetes.io/projected/e1341c0c-f37b-464a-9923-1cd46d821d53-kube-api-access-75g46\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157421 kubelet[3173]: I0904 23:56:08.157275 3173 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1341c0c-f37b-464a-9923-1cd46d821d53-cilium-run\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157421 kubelet[3173]: I0904 23:56:08.157296 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1341c0c-f37b-464a-9923-1cd46d821d53-lib-modules\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157421 kubelet[3173]: I0904 23:56:08.157377 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1341c0c-f37b-464a-9923-1cd46d821d53-host-proc-sys-kernel\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157421 kubelet[3173]: I0904 23:56:08.157394 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1341c0c-f37b-464a-9923-1cd46d821d53-bpf-maps\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157672 kubelet[3173]: I0904 23:56:08.157409 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1341c0c-f37b-464a-9923-1cd46d821d53-hostproc\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157672 kubelet[3173]: I0904 23:56:08.157572 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/e1341c0c-f37b-464a-9923-1cd46d821d53-cni-path\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157672 kubelet[3173]: I0904 23:56:08.157589 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1341c0c-f37b-464a-9923-1cd46d821d53-cilium-cgroup\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157672 kubelet[3173]: I0904 23:56:08.157621 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1341c0c-f37b-464a-9923-1cd46d821d53-etc-cni-netd\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157672 kubelet[3173]: I0904 23:56:08.157635 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1341c0c-f37b-464a-9923-1cd46d821d53-clustermesh-secrets\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.157672 kubelet[3173]: I0904 23:56:08.157649 3173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1341c0c-f37b-464a-9923-1cd46d821d53-host-proc-sys-net\") pod \"cilium-rnz5z\" (UID: \"e1341c0c-f37b-464a-9923-1cd46d821d53\") " pod="kube-system/cilium-rnz5z" Sep 4 23:56:08.184490 sshd[5052]: Accepted publickey for core from 139.178.68.195 port 38798 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI Sep 4 23:56:08.186013 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:56:08.190643 systemd-logind[1900]: 
New session 26 of user core. Sep 4 23:56:08.200008 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 23:56:08.316005 sshd[5054]: Connection closed by 139.178.68.195 port 38798 Sep 4 23:56:08.316499 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Sep 4 23:56:08.320529 systemd[1]: sshd@25-172.31.21.112:22-139.178.68.195:38798.service: Deactivated successfully. Sep 4 23:56:08.322976 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 23:56:08.326964 systemd-logind[1900]: Session 26 logged out. Waiting for processes to exit. Sep 4 23:56:08.327944 systemd-logind[1900]: Removed session 26. Sep 4 23:56:08.352103 systemd[1]: Started sshd@26-172.31.21.112:22-139.178.68.195:38804.service - OpenSSH per-connection server daemon (139.178.68.195:38804). Sep 4 23:56:08.417142 containerd[1925]: time="2025-09-04T23:56:08.417097073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rnz5z,Uid:e1341c0c-f37b-464a-9923-1cd46d821d53,Namespace:kube-system,Attempt:0,}" Sep 4 23:56:08.446285 containerd[1925]: time="2025-09-04T23:56:08.446087211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:56:08.446285 containerd[1925]: time="2025-09-04T23:56:08.446153173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:56:08.446285 containerd[1925]: time="2025-09-04T23:56:08.446168390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:56:08.446509 containerd[1925]: time="2025-09-04T23:56:08.446237448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:56:08.471017 systemd[1]: Started cri-containerd-2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415.scope - libcontainer container 2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415. Sep 4 23:56:08.493934 containerd[1925]: time="2025-09-04T23:56:08.493896651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rnz5z,Uid:e1341c0c-f37b-464a-9923-1cd46d821d53,Namespace:kube-system,Attempt:0,} returns sandbox id \"2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415\"" Sep 4 23:56:08.496710 containerd[1925]: time="2025-09-04T23:56:08.496666399Z" level=info msg="CreateContainer within sandbox \"2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:56:08.514534 sshd[5065]: Accepted publickey for core from 139.178.68.195 port 38804 ssh2: RSA SHA256:TpITYfQnL/nDXgwdiVCp8iNNsFEU5i6YEA6IiXpTloI Sep 4 23:56:08.516988 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:56:08.519045 containerd[1925]: time="2025-09-04T23:56:08.518729138Z" level=info msg="CreateContainer within sandbox \"2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c7cc2453af3dd33e2f583a1c37acdbbfffe20d657c3d25ad657d8160255eb2fc\"" Sep 4 23:56:08.520134 containerd[1925]: time="2025-09-04T23:56:08.519727266Z" level=info msg="StartContainer for \"c7cc2453af3dd33e2f583a1c37acdbbfffe20d657c3d25ad657d8160255eb2fc\"" Sep 4 23:56:08.525571 systemd-logind[1900]: New session 27 of user core. Sep 4 23:56:08.532978 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 4 23:56:08.563972 systemd[1]: Started cri-containerd-c7cc2453af3dd33e2f583a1c37acdbbfffe20d657c3d25ad657d8160255eb2fc.scope - libcontainer container c7cc2453af3dd33e2f583a1c37acdbbfffe20d657c3d25ad657d8160255eb2fc.
Sep 4 23:56:08.595392 containerd[1925]: time="2025-09-04T23:56:08.594728152Z" level=info msg="StartContainer for \"c7cc2453af3dd33e2f583a1c37acdbbfffe20d657c3d25ad657d8160255eb2fc\" returns successfully"
Sep 4 23:56:08.982167 systemd[1]: cri-containerd-c7cc2453af3dd33e2f583a1c37acdbbfffe20d657c3d25ad657d8160255eb2fc.scope: Deactivated successfully.
Sep 4 23:56:08.982758 systemd[1]: cri-containerd-c7cc2453af3dd33e2f583a1c37acdbbfffe20d657c3d25ad657d8160255eb2fc.scope: Consumed 21ms CPU time, 9.3M memory peak, 2.9M read from disk.
Sep 4 23:56:09.037635 containerd[1925]: time="2025-09-04T23:56:09.037543969Z" level=info msg="shim disconnected" id=c7cc2453af3dd33e2f583a1c37acdbbfffe20d657c3d25ad657d8160255eb2fc namespace=k8s.io
Sep 4 23:56:09.037635 containerd[1925]: time="2025-09-04T23:56:09.037628389Z" level=warning msg="cleaning up after shim disconnected" id=c7cc2453af3dd33e2f583a1c37acdbbfffe20d657c3d25ad657d8160255eb2fc namespace=k8s.io
Sep 4 23:56:09.038012 containerd[1925]: time="2025-09-04T23:56:09.037659859Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:56:10.008754 containerd[1925]: time="2025-09-04T23:56:10.008403878Z" level=info msg="CreateContainer within sandbox \"2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:56:10.028601 containerd[1925]: time="2025-09-04T23:56:10.028554668Z" level=info msg="CreateContainer within sandbox \"2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c59fe7ed1e3e8c7c0fed0586b182b747acaf4cd542bfdaf05466edc449ab8e4d\""
Sep 4 23:56:10.029428 containerd[1925]: time="2025-09-04T23:56:10.029202626Z" level=info msg="StartContainer for \"c59fe7ed1e3e8c7c0fed0586b182b747acaf4cd542bfdaf05466edc449ab8e4d\""
Sep 4 23:56:10.060022 systemd[1]: Started cri-containerd-c59fe7ed1e3e8c7c0fed0586b182b747acaf4cd542bfdaf05466edc449ab8e4d.scope - libcontainer container c59fe7ed1e3e8c7c0fed0586b182b747acaf4cd542bfdaf05466edc449ab8e4d.
Sep 4 23:56:10.096178 containerd[1925]: time="2025-09-04T23:56:10.096137638Z" level=info msg="StartContainer for \"c59fe7ed1e3e8c7c0fed0586b182b747acaf4cd542bfdaf05466edc449ab8e4d\" returns successfully"
Sep 4 23:56:10.464830 systemd[1]: cri-containerd-c59fe7ed1e3e8c7c0fed0586b182b747acaf4cd542bfdaf05466edc449ab8e4d.scope: Deactivated successfully.
Sep 4 23:56:10.465283 systemd[1]: cri-containerd-c59fe7ed1e3e8c7c0fed0586b182b747acaf4cd542bfdaf05466edc449ab8e4d.scope: Consumed 18ms CPU time, 7M memory peak, 1.6M read from disk.
Sep 4 23:56:10.489750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c59fe7ed1e3e8c7c0fed0586b182b747acaf4cd542bfdaf05466edc449ab8e4d-rootfs.mount: Deactivated successfully.
Sep 4 23:56:10.504987 containerd[1925]: time="2025-09-04T23:56:10.504929552Z" level=info msg="shim disconnected" id=c59fe7ed1e3e8c7c0fed0586b182b747acaf4cd542bfdaf05466edc449ab8e4d namespace=k8s.io
Sep 4 23:56:10.504987 containerd[1925]: time="2025-09-04T23:56:10.504980815Z" level=warning msg="cleaning up after shim disconnected" id=c59fe7ed1e3e8c7c0fed0586b182b747acaf4cd542bfdaf05466edc449ab8e4d namespace=k8s.io
Sep 4 23:56:10.504987 containerd[1925]: time="2025-09-04T23:56:10.504989150Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:56:11.011603 containerd[1925]: time="2025-09-04T23:56:11.011477367Z" level=info msg="CreateContainer within sandbox \"2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:56:11.035229 containerd[1925]: time="2025-09-04T23:56:11.035187352Z" level=info msg="CreateContainer within sandbox \"2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5166ae7fb5737c77f06f76fc2dc037c09d733f8b2b9443c022aba2e70e08d7a1\""
Sep 4 23:56:11.035645 containerd[1925]: time="2025-09-04T23:56:11.035622848Z" level=info msg="StartContainer for \"5166ae7fb5737c77f06f76fc2dc037c09d733f8b2b9443c022aba2e70e08d7a1\""
Sep 4 23:56:11.068965 systemd[1]: Started cri-containerd-5166ae7fb5737c77f06f76fc2dc037c09d733f8b2b9443c022aba2e70e08d7a1.scope - libcontainer container 5166ae7fb5737c77f06f76fc2dc037c09d733f8b2b9443c022aba2e70e08d7a1.
Sep 4 23:56:11.101678 containerd[1925]: time="2025-09-04T23:56:11.101629478Z" level=info msg="StartContainer for \"5166ae7fb5737c77f06f76fc2dc037c09d733f8b2b9443c022aba2e70e08d7a1\" returns successfully"
Sep 4 23:56:11.315432 systemd[1]: cri-containerd-5166ae7fb5737c77f06f76fc2dc037c09d733f8b2b9443c022aba2e70e08d7a1.scope: Deactivated successfully.
Sep 4 23:56:11.335622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5166ae7fb5737c77f06f76fc2dc037c09d733f8b2b9443c022aba2e70e08d7a1-rootfs.mount: Deactivated successfully.
Sep 4 23:56:11.350071 containerd[1925]: time="2025-09-04T23:56:11.350007184Z" level=info msg="shim disconnected" id=5166ae7fb5737c77f06f76fc2dc037c09d733f8b2b9443c022aba2e70e08d7a1 namespace=k8s.io
Sep 4 23:56:11.350466 containerd[1925]: time="2025-09-04T23:56:11.350277023Z" level=warning msg="cleaning up after shim disconnected" id=5166ae7fb5737c77f06f76fc2dc037c09d733f8b2b9443c022aba2e70e08d7a1 namespace=k8s.io
Sep 4 23:56:11.350466 containerd[1925]: time="2025-09-04T23:56:11.350299217Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:56:11.783625 kubelet[3173]: E0904 23:56:11.783587 3173 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 23:56:12.021706 containerd[1925]: time="2025-09-04T23:56:12.021665447Z" level=info msg="CreateContainer within sandbox \"2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:56:12.044943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415314379.mount: Deactivated successfully.
Sep 4 23:56:12.046369 containerd[1925]: time="2025-09-04T23:56:12.046326149Z" level=info msg="CreateContainer within sandbox \"2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d824d1be61aa04526b530b96d8503a0a640987a6d541000c093def90bf0fabd7\""
Sep 4 23:56:12.047610 containerd[1925]: time="2025-09-04T23:56:12.047426827Z" level=info msg="StartContainer for \"d824d1be61aa04526b530b96d8503a0a640987a6d541000c093def90bf0fabd7\""
Sep 4 23:56:12.090008 systemd[1]: Started cri-containerd-d824d1be61aa04526b530b96d8503a0a640987a6d541000c093def90bf0fabd7.scope - libcontainer container d824d1be61aa04526b530b96d8503a0a640987a6d541000c093def90bf0fabd7.
Sep 4 23:56:12.118422 systemd[1]: cri-containerd-d824d1be61aa04526b530b96d8503a0a640987a6d541000c093def90bf0fabd7.scope: Deactivated successfully.
Sep 4 23:56:12.122733 containerd[1925]: time="2025-09-04T23:56:12.122615340Z" level=info msg="StartContainer for \"d824d1be61aa04526b530b96d8503a0a640987a6d541000c093def90bf0fabd7\" returns successfully"
Sep 4 23:56:12.144735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d824d1be61aa04526b530b96d8503a0a640987a6d541000c093def90bf0fabd7-rootfs.mount: Deactivated successfully.
Sep 4 23:56:12.159411 containerd[1925]: time="2025-09-04T23:56:12.159343281Z" level=info msg="shim disconnected" id=d824d1be61aa04526b530b96d8503a0a640987a6d541000c093def90bf0fabd7 namespace=k8s.io
Sep 4 23:56:12.159411 containerd[1925]: time="2025-09-04T23:56:12.159403146Z" level=warning msg="cleaning up after shim disconnected" id=d824d1be61aa04526b530b96d8503a0a640987a6d541000c093def90bf0fabd7 namespace=k8s.io
Sep 4 23:56:12.159411 containerd[1925]: time="2025-09-04T23:56:12.159414689Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:56:12.176851 containerd[1925]: time="2025-09-04T23:56:12.176512900Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:56:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 23:56:12.697265 kubelet[3173]: E0904 23:56:12.697213 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-prws6" podUID="436d9115-d030-43a0-abfe-120fb12689f6"
Sep 4 23:56:13.030278 containerd[1925]: time="2025-09-04T23:56:13.029843432Z" level=info msg="CreateContainer within sandbox \"2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:56:13.059470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1362583955.mount: Deactivated successfully.
Sep 4 23:56:13.071117 containerd[1925]: time="2025-09-04T23:56:13.071067619Z" level=info msg="CreateContainer within sandbox \"2557048e8552c57f7c1b1c2df324a000e87aaf79d6ce38eff781d8e0dbbeb415\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e805ba31b506f448375d1f1c63644d068c0f339efedec64d14e94ea72dad8f88\""
Sep 4 23:56:13.072985 containerd[1925]: time="2025-09-04T23:56:13.072044375Z" level=info msg="StartContainer for \"e805ba31b506f448375d1f1c63644d068c0f339efedec64d14e94ea72dad8f88\""
Sep 4 23:56:13.113960 systemd[1]: Started cri-containerd-e805ba31b506f448375d1f1c63644d068c0f339efedec64d14e94ea72dad8f88.scope - libcontainer container e805ba31b506f448375d1f1c63644d068c0f339efedec64d14e94ea72dad8f88.
Sep 4 23:56:13.149455 containerd[1925]: time="2025-09-04T23:56:13.148319302Z" level=info msg="StartContainer for \"e805ba31b506f448375d1f1c63644d068c0f339efedec64d14e94ea72dad8f88\" returns successfully"
Sep 4 23:56:13.772326 kubelet[3173]: I0904 23:56:13.772272 3173 setters.go:602] "Node became not ready" node="ip-172-31-21-112" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T23:56:13Z","lastTransitionTime":"2025-09-04T23:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 4 23:56:14.696960 kubelet[3173]: E0904 23:56:14.696900 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-prws6" podUID="436d9115-d030-43a0-abfe-120fb12689f6"
Sep 4 23:56:15.050532 kubelet[3173]: I0904 23:56:15.050066 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rnz5z" podStartSLOduration=8.050049978 podStartE2EDuration="8.050049978s" podCreationTimestamp="2025-09-04 23:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:56:15.049833381 +0000 UTC m=+113.466173923" watchObservedRunningTime="2025-09-04 23:56:15.050049978 +0000 UTC m=+113.466390522"
Sep 4 23:56:16.697584 kubelet[3173]: E0904 23:56:16.697528 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-prws6" podUID="436d9115-d030-43a0-abfe-120fb12689f6"
Sep 4 23:56:17.376818 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 4 23:56:17.727252 systemd[1]: run-containerd-runc-k8s.io-e805ba31b506f448375d1f1c63644d068c0f339efedec64d14e94ea72dad8f88-runc.eGlOLx.mount: Deactivated successfully.
Sep 4 23:56:20.800407 systemd-networkd[1828]: lxc_health: Link UP
Sep 4 23:56:20.801979 systemd-networkd[1828]: lxc_health: Gained carrier
Sep 4 23:56:20.804166 (udev-worker)[5912]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:56:20.851165 systemd[1]: run-containerd-runc-k8s.io-e805ba31b506f448375d1f1c63644d068c0f339efedec64d14e94ea72dad8f88-runc.PnCuX7.mount: Deactivated successfully.
Sep 4 23:56:21.700815 containerd[1925]: time="2025-09-04T23:56:21.700165820Z" level=info msg="StopPodSandbox for \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\""
Sep 4 23:56:21.700815 containerd[1925]: time="2025-09-04T23:56:21.700276813Z" level=info msg="TearDown network for sandbox \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\" successfully"
Sep 4 23:56:21.700815 containerd[1925]: time="2025-09-04T23:56:21.700291459Z" level=info msg="StopPodSandbox for \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\" returns successfully"
Sep 4 23:56:21.701872 containerd[1925]: time="2025-09-04T23:56:21.701842941Z" level=info msg="RemovePodSandbox for \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\""
Sep 4 23:56:21.701979 containerd[1925]: time="2025-09-04T23:56:21.701878303Z" level=info msg="Forcibly stopping sandbox \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\""
Sep 4 23:56:21.701979 containerd[1925]: time="2025-09-04T23:56:21.701942485Z" level=info msg="TearDown network for sandbox \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\" successfully"
Sep 4 23:56:21.710688 containerd[1925]: time="2025-09-04T23:56:21.709882774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:56:21.710857 containerd[1925]: time="2025-09-04T23:56:21.710830464Z" level=info msg="RemovePodSandbox \"eddb7ea0528e794339eae6b4d6228afce5a971cd4f3249b6e3ac1bc7066463bd\" returns successfully"
Sep 4 23:56:21.716972 containerd[1925]: time="2025-09-04T23:56:21.716299431Z" level=info msg="StopPodSandbox for \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\""
Sep 4 23:56:21.716972 containerd[1925]: time="2025-09-04T23:56:21.716390862Z" level=info msg="TearDown network for sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" successfully"
Sep 4 23:56:21.716972 containerd[1925]: time="2025-09-04T23:56:21.716406277Z" level=info msg="StopPodSandbox for \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" returns successfully"
Sep 4 23:56:21.716972 containerd[1925]: time="2025-09-04T23:56:21.716853453Z" level=info msg="RemovePodSandbox for \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\""
Sep 4 23:56:21.716972 containerd[1925]: time="2025-09-04T23:56:21.716893754Z" level=info msg="Forcibly stopping sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\""
Sep 4 23:56:21.717213 containerd[1925]: time="2025-09-04T23:56:21.716959402Z" level=info msg="TearDown network for sandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" successfully"
Sep 4 23:56:21.723032 containerd[1925]: time="2025-09-04T23:56:21.722997510Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:56:21.723121 containerd[1925]: time="2025-09-04T23:56:21.723079895Z" level=info msg="RemovePodSandbox \"d6826ab75f79d77aa58f11a9821e436cce2169308325c2b24cbb8a5b4b236b18\" returns successfully"
Sep 4 23:56:22.144076 systemd-networkd[1828]: lxc_health: Gained IPv6LL
Sep 4 23:56:25.000921 ntpd[1892]: Listen normally on 15 lxc_health [fe80::9012:f1ff:fe10:551f%14]:123
Sep 4 23:56:25.002191 ntpd[1892]: 4 Sep 23:56:24 ntpd[1892]: Listen normally on 15 lxc_health [fe80::9012:f1ff:fe10:551f%14]:123
Sep 4 23:56:27.472267 systemd[1]: run-containerd-runc-k8s.io-e805ba31b506f448375d1f1c63644d068c0f339efedec64d14e94ea72dad8f88-runc.E3DEs5.mount: Deactivated successfully.
Sep 4 23:56:27.549956 sshd[5117]: Connection closed by 139.178.68.195 port 38804
Sep 4 23:56:27.551983 sshd-session[5065]: pam_unix(sshd:session): session closed for user core
Sep 4 23:56:27.555685 systemd[1]: sshd@26-172.31.21.112:22-139.178.68.195:38804.service: Deactivated successfully.
Sep 4 23:56:27.558221 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 23:56:27.559155 systemd-logind[1900]: Session 27 logged out. Waiting for processes to exit.
Sep 4 23:56:27.560237 systemd-logind[1900]: Removed session 27.
Sep 4 23:56:41.698137 systemd[1]: cri-containerd-79212ca014d5fb528d91a74addab7e92b5d7391373f2be21a6238637c823dfd5.scope: Deactivated successfully.
Sep 4 23:56:41.699935 systemd[1]: cri-containerd-79212ca014d5fb528d91a74addab7e92b5d7391373f2be21a6238637c823dfd5.scope: Consumed 2.726s CPU time, 84.2M memory peak, 39.5M read from disk.
Sep 4 23:56:41.721150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79212ca014d5fb528d91a74addab7e92b5d7391373f2be21a6238637c823dfd5-rootfs.mount: Deactivated successfully.
Sep 4 23:56:41.743645 containerd[1925]: time="2025-09-04T23:56:41.743576328Z" level=info msg="shim disconnected" id=79212ca014d5fb528d91a74addab7e92b5d7391373f2be21a6238637c823dfd5 namespace=k8s.io
Sep 4 23:56:41.743645 containerd[1925]: time="2025-09-04T23:56:41.743623619Z" level=warning msg="cleaning up after shim disconnected" id=79212ca014d5fb528d91a74addab7e92b5d7391373f2be21a6238637c823dfd5 namespace=k8s.io
Sep 4 23:56:41.743645 containerd[1925]: time="2025-09-04T23:56:41.743632007Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:56:41.757774 containerd[1925]: time="2025-09-04T23:56:41.757736796Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:56:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 23:56:42.097416 kubelet[3173]: I0904 23:56:42.097151 3173 scope.go:117] "RemoveContainer" containerID="79212ca014d5fb528d91a74addab7e92b5d7391373f2be21a6238637c823dfd5"
Sep 4 23:56:42.101949 containerd[1925]: time="2025-09-04T23:56:42.101914879Z" level=info msg="CreateContainer within sandbox \"003bc2e15ee0c6c3173f7b8b527578d2cc12e4e50d256856296703f71883cfa4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 23:56:42.162966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257096780.mount: Deactivated successfully.
Sep 4 23:56:42.171224 containerd[1925]: time="2025-09-04T23:56:42.171174463Z" level=info msg="CreateContainer within sandbox \"003bc2e15ee0c6c3173f7b8b527578d2cc12e4e50d256856296703f71883cfa4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2955127a487a8182798db08e17662bfae5221414ec6d9fb00bbd10062fbb55a4\""
Sep 4 23:56:42.171825 containerd[1925]: time="2025-09-04T23:56:42.171713418Z" level=info msg="StartContainer for \"2955127a487a8182798db08e17662bfae5221414ec6d9fb00bbd10062fbb55a4\""
Sep 4 23:56:42.204965 systemd[1]: Started cri-containerd-2955127a487a8182798db08e17662bfae5221414ec6d9fb00bbd10062fbb55a4.scope - libcontainer container 2955127a487a8182798db08e17662bfae5221414ec6d9fb00bbd10062fbb55a4.
Sep 4 23:56:42.253941 containerd[1925]: time="2025-09-04T23:56:42.253901455Z" level=info msg="StartContainer for \"2955127a487a8182798db08e17662bfae5221414ec6d9fb00bbd10062fbb55a4\" returns successfully"
Sep 4 23:56:44.640869 kubelet[3173]: E0904 23:56:44.640628 3173 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-112?timeout=10s\": context deadline exceeded"
Sep 4 23:56:46.932387 systemd[1]: cri-containerd-a06ee6db3834a121d5074f29d0f8719f0aea4b25e79f0f0063c4dfdc8441fc91.scope: Deactivated successfully.
Sep 4 23:56:46.932648 systemd[1]: cri-containerd-a06ee6db3834a121d5074f29d0f8719f0aea4b25e79f0f0063c4dfdc8441fc91.scope: Consumed 1.468s CPU time, 29.1M memory peak, 12.3M read from disk.
Sep 4 23:56:46.953175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a06ee6db3834a121d5074f29d0f8719f0aea4b25e79f0f0063c4dfdc8441fc91-rootfs.mount: Deactivated successfully.
Sep 4 23:56:46.978850 containerd[1925]: time="2025-09-04T23:56:46.978793624Z" level=info msg="shim disconnected" id=a06ee6db3834a121d5074f29d0f8719f0aea4b25e79f0f0063c4dfdc8441fc91 namespace=k8s.io
Sep 4 23:56:46.978850 containerd[1925]: time="2025-09-04T23:56:46.978842009Z" level=warning msg="cleaning up after shim disconnected" id=a06ee6db3834a121d5074f29d0f8719f0aea4b25e79f0f0063c4dfdc8441fc91 namespace=k8s.io
Sep 4 23:56:46.978850 containerd[1925]: time="2025-09-04T23:56:46.978850510Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:56:47.107033 kubelet[3173]: I0904 23:56:47.107004 3173 scope.go:117] "RemoveContainer" containerID="a06ee6db3834a121d5074f29d0f8719f0aea4b25e79f0f0063c4dfdc8441fc91"
Sep 4 23:56:47.109512 containerd[1925]: time="2025-09-04T23:56:47.109385813Z" level=info msg="CreateContainer within sandbox \"78c88af3778483d42aba830f17f29683ba76d310ae2053bcfb004c7eb43b5a88\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 23:56:47.137905 containerd[1925]: time="2025-09-04T23:56:47.137865509Z" level=info msg="CreateContainer within sandbox \"78c88af3778483d42aba830f17f29683ba76d310ae2053bcfb004c7eb43b5a88\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"89a4c229e4531f7b9905df619632767ea68298a84ef89f20051280e5f10c2792\""
Sep 4 23:56:47.138335 containerd[1925]: time="2025-09-04T23:56:47.138314523Z" level=info msg="StartContainer for \"89a4c229e4531f7b9905df619632767ea68298a84ef89f20051280e5f10c2792\""
Sep 4 23:56:47.169983 systemd[1]: Started cri-containerd-89a4c229e4531f7b9905df619632767ea68298a84ef89f20051280e5f10c2792.scope - libcontainer container 89a4c229e4531f7b9905df619632767ea68298a84ef89f20051280e5f10c2792.
Sep 4 23:56:47.210975 containerd[1925]: time="2025-09-04T23:56:47.210860621Z" level=info msg="StartContainer for \"89a4c229e4531f7b9905df619632767ea68298a84ef89f20051280e5f10c2792\" returns successfully"
Sep 4 23:56:54.641639 kubelet[3173]: E0904 23:56:54.641584 3173 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-112?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"