Sep 12 10:11:43.925578 kernel: Linux version 6.6.105-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 08:42:12 -00 2025
Sep 12 10:11:43.925605 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2
Sep 12 10:11:43.925617 kernel: BIOS-provided physical RAM map:
Sep 12 10:11:43.925624 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 10:11:43.925631 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 12 10:11:43.925637 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 12 10:11:43.925646 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 10:11:43.925653 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 10:11:43.925659 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 10:11:43.925666 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 10:11:43.925676 kernel: NX (Execute Disable) protection: active
Sep 12 10:11:43.925683 kernel: APIC: Static calls initialized
Sep 12 10:11:43.925690 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Sep 12 10:11:43.925697 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Sep 12 10:11:43.925706 kernel: extended physical RAM map:
Sep 12 10:11:43.925713 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 10:11:43.925724 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Sep 12 10:11:43.925731 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Sep 12 10:11:43.925739 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Sep 12 10:11:43.925747 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 12 10:11:43.925754 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 10:11:43.925762 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 10:11:43.925769 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 10:11:43.925777 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 10:11:43.925784 kernel: efi: EFI v2.7 by EDK II
Sep 12 10:11:43.925792 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 12 10:11:43.925803 kernel: secureboot: Secure boot disabled
Sep 12 10:11:43.925810 kernel: SMBIOS 2.7 present.
Sep 12 10:11:43.925818 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 12 10:11:43.925825 kernel: Hypervisor detected: KVM
Sep 12 10:11:43.925833 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 10:11:43.925841 kernel: kvm-clock: using sched offset of 4551837079 cycles
Sep 12 10:11:43.925849 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 10:11:43.925857 kernel: tsc: Detected 2499.996 MHz processor
Sep 12 10:11:43.925865 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 10:11:43.925873 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 10:11:43.925881 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 12 10:11:43.925891 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 10:11:43.925899 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 10:11:43.925907 kernel: Using GB pages for direct mapping
Sep 12 10:11:43.925919 kernel: ACPI: Early table checksum verification disabled
Sep 12 10:11:43.925927 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 12 10:11:43.925935 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 10:11:43.925946 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 10:11:43.925954 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 12 10:11:43.925963 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 12 10:11:43.925971 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 12 10:11:43.925979 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 10:11:43.925988 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 10:11:43.925996 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 12 10:11:43.926004 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 12 10:11:43.926015 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 10:11:43.926023 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 10:11:43.926031 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 12 10:11:43.926039 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 12 10:11:43.926048 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 12 10:11:43.926056 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 12 10:11:43.926064 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 12 10:11:43.926073 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 12 10:11:43.926081 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 12 10:11:43.926092 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 12 10:11:43.926100 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 12 10:11:43.926108 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 12 10:11:43.926116 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 12 10:11:43.926948 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 12 10:11:43.926961 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 10:11:43.926970 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 10:11:43.926979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 12 10:11:43.926989 kernel: NUMA: Initialized distance table, cnt=1
Sep 12 10:11:43.927003 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Sep 12 10:11:43.927337 kernel: Zone ranges:
Sep 12 10:11:43.927347 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 10:11:43.927355 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 12 10:11:43.927363 kernel: Normal empty
Sep 12 10:11:43.927372 kernel: Movable zone start for each node
Sep 12 10:11:43.927380 kernel: Early memory node ranges
Sep 12 10:11:43.927389 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 10:11:43.927397 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 12 10:11:43.927410 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 12 10:11:43.927418 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 12 10:11:43.927427 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 10:11:43.927435 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 10:11:43.927443 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 12 10:11:43.927452 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 12 10:11:43.927460 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 12 10:11:43.927469 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 10:11:43.927478 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 12 10:11:43.927489 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 10:11:43.927497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 10:11:43.927506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 10:11:43.927514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 10:11:43.927522 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 10:11:43.927531 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 10:11:43.927539 kernel: TSC deadline timer available
Sep 12 10:11:43.927548 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 10:11:43.927556 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 10:11:43.927565 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 12 10:11:43.927575 kernel: Booting paravirtualized kernel on KVM
Sep 12 10:11:43.927584 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 10:11:43.927593 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 10:11:43.927601 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 10:11:43.927610 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 10:11:43.927618 kernel: pcpu-alloc: [0] 0 1
Sep 12 10:11:43.927626 kernel: kvm-guest: PV spinlocks enabled
Sep 12 10:11:43.927635 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 10:11:43.927647 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2
Sep 12 10:11:43.927657 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 10:11:43.927665 kernel: random: crng init done
Sep 12 10:11:43.927673 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 10:11:43.927681 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 10:11:43.927690 kernel: Fallback order for Node 0: 0
Sep 12 10:11:43.927698 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Sep 12 10:11:43.927706 kernel: Policy zone: DMA32
Sep 12 10:11:43.927718 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 10:11:43.927726 kernel: Memory: 1872536K/2037804K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 165012K reserved, 0K cma-reserved)
Sep 12 10:11:43.927735 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 10:11:43.927743 kernel: Kernel/User page tables isolation: enabled
Sep 12 10:11:43.927752 kernel: ftrace: allocating 37946 entries in 149 pages
Sep 12 10:11:43.927769 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 10:11:43.927780 kernel: Dynamic Preempt: voluntary
Sep 12 10:11:43.927789 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 10:11:43.927799 kernel: rcu: RCU event tracing is enabled.
Sep 12 10:11:43.927808 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 10:11:43.927817 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 10:11:43.927826 kernel: Rude variant of Tasks RCU enabled.
Sep 12 10:11:43.927837 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 10:11:43.927846 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 10:11:43.927855 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 10:11:43.927864 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 10:11:43.927873 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 10:11:43.927885 kernel: Console: colour dummy device 80x25
Sep 12 10:11:43.927894 kernel: printk: console [tty0] enabled
Sep 12 10:11:43.927902 kernel: printk: console [ttyS0] enabled
Sep 12 10:11:43.927911 kernel: ACPI: Core revision 20230628
Sep 12 10:11:43.927920 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 12 10:11:43.927929 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 10:11:43.927938 kernel: x2apic enabled
Sep 12 10:11:43.927947 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 10:11:43.927956 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Sep 12 10:11:43.927968 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Sep 12 10:11:43.927977 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 10:11:43.927986 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 10:11:43.927995 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 10:11:43.928003 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 10:11:43.928012 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 10:11:43.928021 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 12 10:11:43.928030 kernel: RETBleed: Vulnerable
Sep 12 10:11:43.928038 kernel: Speculative Store Bypass: Vulnerable
Sep 12 10:11:43.928047 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 10:11:43.928058 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 10:11:43.928067 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 12 10:11:43.928076 kernel: active return thunk: its_return_thunk
Sep 12 10:11:43.928084 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 10:11:43.928093 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 10:11:43.928102 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 10:11:43.928111 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 10:11:43.928119 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 12 10:11:43.929907 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 12 10:11:43.929924 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 12 10:11:43.929934 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 12 10:11:43.929950 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 12 10:11:43.929959 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 12 10:11:43.929968 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 10:11:43.929977 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 12 10:11:43.929986 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 12 10:11:43.929995 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 12 10:11:43.930004 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 12 10:11:43.930012 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 12 10:11:43.930021 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 12 10:11:43.930030 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 12 10:11:43.930039 kernel: Freeing SMP alternatives memory: 32K
Sep 12 10:11:43.930048 kernel: pid_max: default: 32768 minimum: 301
Sep 12 10:11:43.930059 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 10:11:43.930068 kernel: landlock: Up and running.
Sep 12 10:11:43.930077 kernel: SELinux: Initializing.
Sep 12 10:11:43.930086 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 10:11:43.930095 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 10:11:43.930104 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 12 10:11:43.930113 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 10:11:43.930139 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 10:11:43.930148 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 10:11:43.930158 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 12 10:11:43.930170 kernel: signal: max sigframe size: 3632
Sep 12 10:11:43.930179 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 10:11:43.930189 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 10:11:43.930198 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 10:11:43.930207 kernel: smp: Bringing up secondary CPUs ...
Sep 12 10:11:43.930216 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 10:11:43.930225 kernel: .... node #0, CPUs: #1
Sep 12 10:11:43.930235 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 12 10:11:43.930245 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 10:11:43.930257 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 10:11:43.930266 kernel: smpboot: Max logical packages: 1
Sep 12 10:11:43.930275 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Sep 12 10:11:43.930284 kernel: devtmpfs: initialized
Sep 12 10:11:43.930293 kernel: x86/mm: Memory block size: 128MB
Sep 12 10:11:43.930302 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 12 10:11:43.930311 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 10:11:43.930321 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 10:11:43.930332 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 10:11:43.930341 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 10:11:43.930350 kernel: audit: initializing netlink subsys (disabled)
Sep 12 10:11:43.930359 kernel: audit: type=2000 audit(1757671903.854:1): state=initialized audit_enabled=0 res=1
Sep 12 10:11:43.930368 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 10:11:43.930377 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 10:11:43.930386 kernel: cpuidle: using governor menu
Sep 12 10:11:43.930395 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 10:11:43.930404 kernel: dca service started, version 1.12.1
Sep 12 10:11:43.930416 kernel: PCI: Using configuration type 1 for base access
Sep 12 10:11:43.930425 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 10:11:43.930434 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 10:11:43.930443 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 10:11:43.930452 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 10:11:43.930461 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 10:11:43.930470 kernel: ACPI: Added _OSI(Module Device)
Sep 12 10:11:43.930479 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 10:11:43.930488 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 10:11:43.930499 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 12 10:11:43.930508 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 10:11:43.930517 kernel: ACPI: Interpreter enabled
Sep 12 10:11:43.930526 kernel: ACPI: PM: (supports S0 S5)
Sep 12 10:11:43.930535 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 10:11:43.930544 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 10:11:43.930553 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 10:11:43.930562 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 10:11:43.930571 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 10:11:43.930747 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 10:11:43.930852 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 10:11:43.930948 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 10:11:43.930960 kernel: acpiphp: Slot [3] registered
Sep 12 10:11:43.930969 kernel: acpiphp: Slot [4] registered
Sep 12 10:11:43.930978 kernel: acpiphp: Slot [5] registered
Sep 12 10:11:43.930987 kernel: acpiphp: Slot [6] registered
Sep 12 10:11:43.930996 kernel: acpiphp: Slot [7] registered
Sep 12 10:11:43.931008 kernel: acpiphp: Slot [8] registered
Sep 12 10:11:43.931017 kernel: acpiphp: Slot [9] registered
Sep 12 10:11:43.931040 kernel: acpiphp: Slot [10] registered
Sep 12 10:11:43.931053 kernel: acpiphp: Slot [11] registered
Sep 12 10:11:43.931066 kernel: acpiphp: Slot [12] registered
Sep 12 10:11:43.931079 kernel: acpiphp: Slot [13] registered
Sep 12 10:11:43.931092 kernel: acpiphp: Slot [14] registered
Sep 12 10:11:43.931105 kernel: acpiphp: Slot [15] registered
Sep 12 10:11:43.931113 kernel: acpiphp: Slot [16] registered
Sep 12 10:11:43.931552 kernel: acpiphp: Slot [17] registered
Sep 12 10:11:43.931564 kernel: acpiphp: Slot [18] registered
Sep 12 10:11:43.931573 kernel: acpiphp: Slot [19] registered
Sep 12 10:11:43.931582 kernel: acpiphp: Slot [20] registered
Sep 12 10:11:43.931591 kernel: acpiphp: Slot [21] registered
Sep 12 10:11:43.931600 kernel: acpiphp: Slot [22] registered
Sep 12 10:11:43.931609 kernel: acpiphp: Slot [23] registered
Sep 12 10:11:43.931618 kernel: acpiphp: Slot [24] registered
Sep 12 10:11:43.931627 kernel: acpiphp: Slot [25] registered
Sep 12 10:11:43.931641 kernel: acpiphp: Slot [26] registered
Sep 12 10:11:43.931650 kernel: acpiphp: Slot [27] registered
Sep 12 10:11:43.931659 kernel: acpiphp: Slot [28] registered
Sep 12 10:11:43.931668 kernel: acpiphp: Slot [29] registered
Sep 12 10:11:43.931676 kernel: acpiphp: Slot [30] registered
Sep 12 10:11:43.931685 kernel: acpiphp: Slot [31] registered
Sep 12 10:11:43.931694 kernel: PCI host bridge to bus 0000:00
Sep 12 10:11:43.931827 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 10:11:43.931915 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 10:11:43.932004 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 10:11:43.932088 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 10:11:43.932970 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 12 10:11:43.933068 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 10:11:43.933250 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 12 10:11:43.933364 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 12 10:11:43.933477 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 12 10:11:43.933573 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 12 10:11:43.933669 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 12 10:11:43.933764 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 12 10:11:43.933857 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 12 10:11:43.933951 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 12 10:11:43.934045 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 12 10:11:43.935236 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 12 10:11:43.935363 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 12 10:11:43.935463 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Sep 12 10:11:43.935560 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 12 10:11:43.935656 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Sep 12 10:11:43.935750 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 10:11:43.935852 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 12 10:11:43.935954 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Sep 12 10:11:43.936054 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 12 10:11:43.937260 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Sep 12 10:11:43.937280 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 10:11:43.937290 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 10:11:43.937300 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 10:11:43.937309 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 10:11:43.937323 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 10:11:43.937333 kernel: iommu: Default domain type: Translated
Sep 12 10:11:43.937342 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 10:11:43.937351 kernel: efivars: Registered efivars operations
Sep 12 10:11:43.937360 kernel: PCI: Using ACPI for IRQ routing
Sep 12 10:11:43.937369 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 10:11:43.937378 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Sep 12 10:11:43.937387 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 12 10:11:43.937396 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 12 10:11:43.937501 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 12 10:11:43.937603 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 12 10:11:43.937697 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 10:11:43.937709 kernel: vgaarb: loaded
Sep 12 10:11:43.937720 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 12 10:11:43.937732 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 12 10:11:43.937741 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 10:11:43.937750 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 10:11:43.937760 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 10:11:43.937772 kernel: pnp: PnP ACPI init
Sep 12 10:11:43.937781 kernel: pnp: PnP ACPI: found 5 devices
Sep 12 10:11:43.937790 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 10:11:43.937800 kernel: NET: Registered PF_INET protocol family
Sep 12 10:11:43.937809 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 10:11:43.937818 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 10:11:43.937827 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 10:11:43.937837 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 10:11:43.937846 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 10:11:43.937858 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 10:11:43.937867 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 10:11:43.937877 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 10:11:43.937886 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 10:11:43.937895 kernel: NET: Registered PF_XDP protocol family
Sep 12 10:11:43.937987 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 10:11:43.938079 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 10:11:43.939315 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 10:11:43.939421 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 10:11:43.939507 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 12 10:11:43.939610 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 10:11:43.939623 kernel: PCI: CLS 0 bytes, default 64
Sep 12 10:11:43.939633 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 10:11:43.939643 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Sep 12 10:11:43.939652 kernel: clocksource: Switched to clocksource tsc
Sep 12 10:11:43.939661 kernel: Initialise system trusted keyrings
Sep 12 10:11:43.939671 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 10:11:43.939683 kernel: Key type asymmetric registered
Sep 12 10:11:43.939692 kernel: Asymmetric key parser 'x509' registered
Sep 12 10:11:43.939701 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 10:11:43.939710 kernel: io scheduler mq-deadline registered
Sep 12 10:11:43.939719 kernel: io scheduler kyber registered
Sep 12 10:11:43.939728 kernel: io scheduler bfq registered
Sep 12 10:11:43.939738 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 10:11:43.939747 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 10:11:43.939756 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 10:11:43.939768 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 10:11:43.939777 kernel: i8042: Warning: Keylock active
Sep 12 10:11:43.939786 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 10:11:43.939795 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 10:11:43.939898 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 12 10:11:43.939988 kernel: rtc_cmos 00:00: registered as rtc0
Sep 12 10:11:43.940075 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T10:11:43 UTC (1757671903)
Sep 12 10:11:43.940176 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 12 10:11:43.940192 kernel: intel_pstate: CPU model not supported
Sep 12 10:11:43.940201 kernel: efifb: probing for efifb
Sep 12 10:11:43.940210 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Sep 12 10:11:43.940238 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 12 10:11:43.940249 kernel: efifb: scrolling: redraw
Sep 12 10:11:43.940259 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 10:11:43.940269 kernel: Console: switching to colour frame buffer device 100x37
Sep 12 10:11:43.940278 kernel: fb0: EFI VGA frame buffer device
Sep 12 10:11:43.940288 kernel: pstore: Using crash dump compression: deflate
Sep 12 10:11:43.940300 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 10:11:43.940310 kernel: NET: Registered PF_INET6 protocol family
Sep 12 10:11:43.940320 kernel: Segment Routing with IPv6
Sep 12 10:11:43.940329 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 10:11:43.940339 kernel: NET: Registered PF_PACKET protocol family
Sep 12 10:11:43.940348 kernel: Key type dns_resolver registered
Sep 12 10:11:43.940358 kernel: IPI shorthand broadcast: enabled
Sep 12 10:11:43.940367 kernel: sched_clock: Marking stable (473002437, 147210934)->(707007877, -86794506)
Sep 12 10:11:43.940377 kernel: registered taskstats version 1
Sep 12 10:11:43.940389 kernel: Loading compiled-in X.509 certificates
Sep 12 10:11:43.940398 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.105-flatcar: 0972efc09ee0bcd53f8cdb5573e11871ce7b16a9'
Sep 12 10:11:43.940408 kernel: Key type .fscrypt registered
Sep 12 10:11:43.940417 kernel: Key type fscrypt-provisioning registered
Sep 12 10:11:43.940427 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 10:11:43.940437 kernel: ima: Allocated hash algorithm: sha1
Sep 12 10:11:43.940446 kernel: ima: No architecture policies found
Sep 12 10:11:43.940456 kernel: clk: Disabling unused clocks
Sep 12 10:11:43.940468 kernel: Freeing unused kernel image (initmem) memory: 43508K
Sep 12 10:11:43.940480 kernel: Write protecting the kernel read-only data: 38912k
Sep 12 10:11:43.940489 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K
Sep 12 10:11:43.940499 kernel: Run /init as init process
Sep 12 10:11:43.940508 kernel: with arguments:
Sep 12 10:11:43.940518 kernel: /init
Sep 12 10:11:43.940527 kernel: with environment:
Sep 12 10:11:43.940537 kernel: HOME=/
Sep 12 10:11:43.940546 kernel: TERM=linux
Sep 12 10:11:43.940558 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 10:11:43.940569 systemd[1]: Successfully made /usr/ read-only.
Sep 12 10:11:43.940582 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 10:11:43.940593 systemd[1]: Detected virtualization amazon. Sep 12 10:11:43.940602 systemd[1]: Detected architecture x86-64. Sep 12 10:11:43.940615 systemd[1]: Running in initrd. Sep 12 10:11:43.940625 systemd[1]: No hostname configured, using default hostname. Sep 12 10:11:43.940635 systemd[1]: Hostname set to . Sep 12 10:11:43.940645 systemd[1]: Initializing machine ID from VM UUID. Sep 12 10:11:43.940655 systemd[1]: Queued start job for default target initrd.target. Sep 12 10:11:43.940665 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 10:11:43.940674 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 10:11:43.940688 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 10:11:43.940698 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 10:11:43.940708 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 10:11:43.940719 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 10:11:43.940730 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 10:11:43.940740 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 10:11:43.940750 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 12 10:11:43.940763 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 10:11:43.940773 systemd[1]: Reached target paths.target - Path Units. Sep 12 10:11:43.940782 systemd[1]: Reached target slices.target - Slice Units. Sep 12 10:11:43.940792 systemd[1]: Reached target swap.target - Swaps. Sep 12 10:11:43.940802 systemd[1]: Reached target timers.target - Timer Units. Sep 12 10:11:43.940812 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 10:11:43.940822 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 10:11:43.940832 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 10:11:43.940842 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 10:11:43.940854 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 10:11:43.940864 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 10:11:43.940874 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 10:11:43.940884 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 10:11:43.940894 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 10:11:43.940904 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 10:11:43.940913 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 10:11:43.940924 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 10:11:43.940936 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 10:11:43.940947 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 10:11:43.940957 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:11:43.940967 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Sep 12 10:11:43.941002 systemd-journald[179]: Collecting audit messages is disabled. Sep 12 10:11:43.941029 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 10:11:43.941040 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 10:11:43.941051 systemd-journald[179]: Journal started Sep 12 10:11:43.941075 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2695c16e81e8a3f30433e9f0ec8b8b) is 4.7M, max 38.2M, 33.4M free. Sep 12 10:11:43.943732 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 10:11:43.922036 systemd-modules-load[180]: Inserted module 'overlay' Sep 12 10:11:43.960153 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 10:11:43.963149 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 10:11:43.963207 kernel: Bridge firewalling registered Sep 12 10:11:43.963788 systemd-modules-load[180]: Inserted module 'br_netfilter' Sep 12 10:11:43.965610 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 10:11:43.966653 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:11:43.968228 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 10:11:43.973336 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:11:43.975169 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:11:43.977288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 10:11:43.980555 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 10:11:43.992351 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 10:11:43.999092 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:11:44.005612 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 10:11:44.008303 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 10:11:44.008958 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 10:11:44.010835 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 10:11:44.019783 dracut-cmdline[212]: dracut-dracut-053 Sep 12 10:11:44.020457 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 10:11:44.025751 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2 Sep 12 10:11:44.069498 systemd-resolved[219]: Positive Trust Anchors: Sep 12 10:11:44.069515 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 10:11:44.069578 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 10:11:44.078019 systemd-resolved[219]: Defaulting to hostname 'linux'. Sep 12 10:11:44.081530 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 10:11:44.082273 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 10:11:44.135765 kernel: SCSI subsystem initialized Sep 12 10:11:44.147154 kernel: Loading iSCSI transport class v2.0-870. Sep 12 10:11:44.160163 kernel: iscsi: registered transport (tcp) Sep 12 10:11:44.185201 kernel: iscsi: registered transport (qla4xxx) Sep 12 10:11:44.185286 kernel: QLogic iSCSI HBA Driver Sep 12 10:11:44.232969 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 10:11:44.240312 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 10:11:44.266520 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 10:11:44.266601 kernel: device-mapper: uevent: version 1.0.3 Sep 12 10:11:44.266624 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 10:11:44.310189 kernel: raid6: avx512x4 gen() 17330 MB/s Sep 12 10:11:44.328187 kernel: raid6: avx512x2 gen() 17872 MB/s Sep 12 10:11:44.346176 kernel: raid6: avx512x1 gen() 17983 MB/s Sep 12 10:11:44.364174 kernel: raid6: avx2x4 gen() 17934 MB/s Sep 12 10:11:44.382161 kernel: raid6: avx2x2 gen() 17944 MB/s Sep 12 10:11:44.400356 kernel: raid6: avx2x1 gen() 13724 MB/s Sep 12 10:11:44.400401 kernel: raid6: using algorithm avx512x1 gen() 17983 MB/s Sep 12 10:11:44.419394 kernel: raid6: .... xor() 21902 MB/s, rmw enabled Sep 12 10:11:44.419442 kernel: raid6: using avx512x2 recovery algorithm Sep 12 10:11:44.441163 kernel: xor: automatically using best checksumming function avx Sep 12 10:11:44.596160 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 10:11:44.606697 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 10:11:44.611361 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 10:11:44.628761 systemd-udevd[398]: Using default interface naming scheme 'v255'. Sep 12 10:11:44.634794 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 10:11:44.641306 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 10:11:44.663547 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Sep 12 10:11:44.693764 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 10:11:44.698349 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 10:11:44.752531 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 10:11:44.762347 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 12 10:11:44.792734 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 10:11:44.794116 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 10:11:44.794767 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 10:11:44.798239 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 10:11:44.807496 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 10:11:44.831557 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 10:11:44.856159 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 10:11:44.875349 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 12 10:11:44.875631 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 12 10:11:44.888149 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 12 10:11:44.888485 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 10:11:44.890326 kernel: AES CTR mode by8 optimization enabled Sep 12 10:11:44.896098 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 10:11:44.897108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:11:44.898977 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:11:44.902350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:11:44.907640 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:21:78:b6:44:21 Sep 12 10:11:44.902562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:11:44.904173 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:11:44.905643 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 10:11:44.917580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:11:44.919902 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 10:11:44.932778 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:11:44.932909 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:11:44.940270 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 12 10:11:44.940538 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 12 10:11:44.941377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:11:44.957160 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 12 10:11:44.964621 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:11:44.971877 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 10:11:44.971915 kernel: GPT:9289727 != 16777215 Sep 12 10:11:44.971935 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 10:11:44.971954 kernel: GPT:9289727 != 16777215 Sep 12 10:11:44.971973 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 10:11:44.972003 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 10:11:44.978435 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:11:44.996025 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:11:45.089731 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by (udev-worker) (460) Sep 12 10:11:45.103177 kernel: BTRFS: device fsid 2566299d-dd4a-4826-ba43-7397a17991fb devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (448) Sep 12 10:11:45.140838 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Sep 12 10:11:45.191685 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 12 10:11:45.203042 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 10:11:45.213041 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 12 10:11:45.213604 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 12 10:11:45.220315 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 10:11:45.228675 disk-uuid[631]: Primary Header is updated. Sep 12 10:11:45.228675 disk-uuid[631]: Secondary Entries is updated. Sep 12 10:11:45.228675 disk-uuid[631]: Secondary Header is updated. Sep 12 10:11:45.237260 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 10:11:45.250165 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 10:11:46.256808 disk-uuid[632]: The operation has completed successfully. Sep 12 10:11:46.258257 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 10:11:46.378021 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 10:11:46.378134 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 10:11:46.423345 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 10:11:46.428561 sh[890]: Success Sep 12 10:11:46.449511 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 10:11:46.573874 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 10:11:46.583264 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 10:11:46.585329 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 10:11:46.619805 kernel: BTRFS info (device dm-0): first mount of filesystem 2566299d-dd4a-4826-ba43-7397a17991fb Sep 12 10:11:46.619873 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:11:46.619888 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 10:11:46.623276 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 10:11:46.623355 kernel: BTRFS info (device dm-0): using free space tree Sep 12 10:11:46.749157 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 10:11:46.764373 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 10:11:46.765437 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 10:11:46.777506 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 10:11:46.780322 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 10:11:46.818342 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:11:46.818416 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:11:46.818438 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 10:11:46.840152 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 10:11:46.847162 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:11:46.850253 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 10:11:46.856402 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 10:11:46.887812 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 12 10:11:46.894357 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 10:11:46.920643 systemd-networkd[1080]: lo: Link UP Sep 12 10:11:46.920653 systemd-networkd[1080]: lo: Gained carrier Sep 12 10:11:46.922499 systemd-networkd[1080]: Enumeration completed Sep 12 10:11:46.922634 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 10:11:46.922917 systemd-networkd[1080]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:11:46.922923 systemd-networkd[1080]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 10:11:46.923371 systemd[1]: Reached target network.target - Network. Sep 12 10:11:46.926954 systemd-networkd[1080]: eth0: Link UP Sep 12 10:11:46.926960 systemd-networkd[1080]: eth0: Gained carrier Sep 12 10:11:46.926974 systemd-networkd[1080]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:11:46.946249 systemd-networkd[1080]: eth0: DHCPv4 address 172.31.20.240/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 10:11:47.268214 ignition[1036]: Ignition 2.20.0 Sep 12 10:11:47.268225 ignition[1036]: Stage: fetch-offline Sep 12 10:11:47.268398 ignition[1036]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:11:47.268406 ignition[1036]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 10:11:47.269774 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 10:11:47.268724 ignition[1036]: Ignition finished successfully Sep 12 10:11:47.275344 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 12 10:11:47.288750 ignition[1091]: Ignition 2.20.0 Sep 12 10:11:47.288764 ignition[1091]: Stage: fetch Sep 12 10:11:47.289240 ignition[1091]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:11:47.289254 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 10:11:47.289384 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 10:11:47.316353 ignition[1091]: PUT result: OK Sep 12 10:11:47.318397 ignition[1091]: parsed url from cmdline: "" Sep 12 10:11:47.318494 ignition[1091]: no config URL provided Sep 12 10:11:47.318508 ignition[1091]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 10:11:47.318520 ignition[1091]: no config at "/usr/lib/ignition/user.ign" Sep 12 10:11:47.318543 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 10:11:47.319952 ignition[1091]: PUT result: OK Sep 12 10:11:47.319997 ignition[1091]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 12 10:11:47.320676 ignition[1091]: GET result: OK Sep 12 10:11:47.320735 ignition[1091]: parsing config with SHA512: 0df0d08f305cace7d0ef235c2e655a480d90a5a340e28a65a4f0fbb6734e452a7f30697e85a11b8a3a392d2f8c17549e8441eb0689f7d65e588df13c04902f27 Sep 12 10:11:47.325054 unknown[1091]: fetched base config from "system" Sep 12 10:11:47.325070 unknown[1091]: fetched base config from "system" Sep 12 10:11:47.325919 ignition[1091]: fetch: fetch complete Sep 12 10:11:47.325075 unknown[1091]: fetched user config from "aws" Sep 12 10:11:47.325929 ignition[1091]: fetch: fetch passed Sep 12 10:11:47.328279 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 10:11:47.325977 ignition[1091]: Ignition finished successfully Sep 12 10:11:47.333341 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 12 10:11:47.348772 ignition[1097]: Ignition 2.20.0 Sep 12 10:11:47.348784 ignition[1097]: Stage: kargs Sep 12 10:11:47.349090 ignition[1097]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:11:47.349105 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 10:11:47.349213 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 10:11:47.349984 ignition[1097]: PUT result: OK Sep 12 10:11:47.352633 ignition[1097]: kargs: kargs passed Sep 12 10:11:47.352694 ignition[1097]: Ignition finished successfully Sep 12 10:11:47.354088 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 10:11:47.359351 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 10:11:47.374656 ignition[1104]: Ignition 2.20.0 Sep 12 10:11:47.374670 ignition[1104]: Stage: disks Sep 12 10:11:47.375230 ignition[1104]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:11:47.375246 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 10:11:47.375372 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 10:11:47.376257 ignition[1104]: PUT result: OK Sep 12 10:11:47.378747 ignition[1104]: disks: disks passed Sep 12 10:11:47.378824 ignition[1104]: Ignition finished successfully Sep 12 10:11:47.380143 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 10:11:47.380933 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 10:11:47.381282 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 10:11:47.381526 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 10:11:47.381759 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 10:11:47.381981 systemd[1]: Reached target basic.target - Basic System. Sep 12 10:11:47.390524 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 12 10:11:47.430575 systemd-fsck[1112]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 10:11:47.433521 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 10:11:47.438354 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 10:11:47.541160 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4caafea7-bbab-4a47-b77b-37af606fc08b r/w with ordered data mode. Quota mode: none. Sep 12 10:11:47.541546 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 10:11:47.542433 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 10:11:47.563297 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 10:11:47.566201 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 10:11:47.567383 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 10:11:47.567454 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 10:11:47.567486 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 10:11:47.578930 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 10:11:47.586347 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 10:11:47.594164 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1131) Sep 12 10:11:47.598160 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:11:47.598239 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:11:47.599470 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 10:11:47.615156 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 10:11:47.618032 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 10:11:47.942736 initrd-setup-root[1156]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 10:11:47.978569 initrd-setup-root[1163]: cut: /sysroot/etc/group: No such file or directory Sep 12 10:11:47.996614 initrd-setup-root[1170]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 10:11:48.012503 initrd-setup-root[1177]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 10:11:48.104272 systemd-networkd[1080]: eth0: Gained IPv6LL Sep 12 10:11:48.293102 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 10:11:48.298241 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 10:11:48.301772 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 10:11:48.308849 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 10:11:48.310141 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:11:48.331758 ignition[1245]: INFO : Ignition 2.20.0 Sep 12 10:11:48.333247 ignition[1245]: INFO : Stage: mount Sep 12 10:11:48.333247 ignition[1245]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:11:48.333247 ignition[1245]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 10:11:48.335476 ignition[1245]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 10:11:48.337271 ignition[1245]: INFO : PUT result: OK Sep 12 10:11:48.340920 ignition[1245]: INFO : mount: mount passed Sep 12 10:11:48.343167 ignition[1245]: INFO : Ignition finished successfully Sep 12 10:11:48.344390 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 10:11:48.352429 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 10:11:48.357118 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 10:11:48.368406 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 12 10:11:48.392186 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1257) Sep 12 10:11:48.395244 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:11:48.395305 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:11:48.397727 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 10:11:48.404171 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 10:11:48.406532 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 10:11:48.424946 ignition[1274]: INFO : Ignition 2.20.0 Sep 12 10:11:48.424946 ignition[1274]: INFO : Stage: files Sep 12 10:11:48.426042 ignition[1274]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:11:48.426042 ignition[1274]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 10:11:48.426042 ignition[1274]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 10:11:48.427336 ignition[1274]: INFO : PUT result: OK Sep 12 10:11:48.429844 ignition[1274]: DEBUG : files: compiled without relabeling support, skipping Sep 12 10:11:48.431323 ignition[1274]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 10:11:48.431323 ignition[1274]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 10:11:48.468464 ignition[1274]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 10:11:48.469196 ignition[1274]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 10:11:48.469196 ignition[1274]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 10:11:48.468820 unknown[1274]: wrote ssh authorized keys file for user: core Sep 12 10:11:48.470821 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 10:11:48.470821 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 12 10:11:48.555371 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 10:11:48.960255 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 10:11:48.960255 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 10:11:48.961946 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 10:11:49.171138 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 10:11:49.308262 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 10:11:49.308262 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 10:11:49.310681 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 12 10:11:49.772152 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 10:11:53.087493 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 10:11:53.087493 ignition[1274]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 10:11:53.089630 ignition[1274]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 10:11:53.090414 ignition[1274]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 10:11:53.090414 ignition[1274]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 10:11:53.090414 ignition[1274]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 12 10:11:53.090414 ignition[1274]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 10:11:53.090414 ignition[1274]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 10:11:53.090414 ignition[1274]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 10:11:53.090414 ignition[1274]: INFO : files: files passed Sep 12 10:11:53.090414 ignition[1274]: INFO : Ignition finished successfully Sep 12 10:11:53.091568 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 10:11:53.102351 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 10:11:53.104573 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 10:11:53.107955 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 10:11:53.108355 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 10:11:53.118809 initrd-setup-root-after-ignition[1302]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 10:11:53.118809 initrd-setup-root-after-ignition[1302]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 10:11:53.120986 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 10:11:53.121854 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 10:11:53.122706 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 10:11:53.129345 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 10:11:53.152038 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 10:11:53.152154 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 10:11:53.153258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 10:11:53.154132 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 10:11:53.154862 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 10:11:53.159279 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 10:11:53.172622 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 10:11:53.178360 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 10:11:53.189976 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 10:11:53.190685 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 10:11:53.191801 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 10:11:53.192659 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 10:11:53.192847 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 10:11:53.193955 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 10:11:53.194786 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 10:11:53.195656 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 10:11:53.196418 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 10:11:53.197173 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 10:11:53.197935 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 10:11:53.198714 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 10:11:53.199596 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 10:11:53.200745 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 10:11:53.201497 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 10:11:53.202213 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 10:11:53.202404 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 10:11:53.203562 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 10:11:53.204358 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 10:11:53.205027 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 10:11:53.205763 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 10:11:53.206238 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 10:11:53.206415 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 10:11:53.207910 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 10:11:53.208115 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 10:11:53.208822 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 10:11:53.208997 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 10:11:53.217382 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 10:11:53.220387 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 10:11:53.223414 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 10:11:53.224279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 10:11:53.225023 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 10:11:53.227584 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 10:11:53.237782 ignition[1326]: INFO : Ignition 2.20.0
Sep 12 10:11:53.237782 ignition[1326]: INFO : Stage: umount
Sep 12 10:11:53.246299 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 10:11:53.246299 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 10:11:53.246299 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 10:11:53.246299 ignition[1326]: INFO : PUT result: OK
Sep 12 10:11:53.246299 ignition[1326]: INFO : umount: umount passed
Sep 12 10:11:53.246299 ignition[1326]: INFO : Ignition finished successfully
Sep 12 10:11:53.241710 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 10:11:53.241823 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 10:11:53.248476 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 10:11:53.248605 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 10:11:53.249246 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 10:11:53.249321 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 10:11:53.250635 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 10:11:53.250691 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 10:11:53.251249 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 10:11:53.251297 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 10:11:53.251624 systemd[1]: Stopped target network.target - Network.
Sep 12 10:11:53.251921 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 10:11:53.251972 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 10:11:53.254569 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 10:11:53.254821 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 10:11:53.254901 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 10:11:53.255262 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 10:11:53.255701 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 10:11:53.256013 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 10:11:53.256065 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 10:11:53.256368 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 10:11:53.256409 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 10:11:53.256682 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 10:11:53.256730 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 10:11:53.257018 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 10:11:53.257055 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 10:11:53.259380 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 10:11:53.260197 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 10:11:53.262350 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 10:11:53.267932 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 10:11:53.268079 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 10:11:53.272391 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 10:11:53.272746 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 10:11:53.272881 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 10:11:53.276415 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 10:11:53.276758 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 10:11:53.276889 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 10:11:53.279371 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 10:11:53.279448 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 10:11:53.280089 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 10:11:53.280216 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 10:11:53.286277 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 10:11:53.286850 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 10:11:53.287043 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 10:11:53.287761 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 10:11:53.287826 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 10:11:53.288403 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 10:11:53.288460 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 10:11:53.289066 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 10:11:53.289136 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 10:11:53.289862 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 10:11:53.293548 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 10:11:53.293643 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 10:11:53.300554 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 10:11:53.300738 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 10:11:53.302894 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 10:11:53.303112 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 10:11:53.304494 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 10:11:53.304546 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 10:11:53.305438 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 10:11:53.305510 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 10:11:53.307651 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 10:11:53.307703 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 10:11:53.308162 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 10:11:53.308225 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 10:11:53.320378 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 10:11:53.321482 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 10:11:53.321906 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 10:11:53.323206 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 10:11:53.323262 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 10:11:53.323645 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 10:11:53.323691 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 10:11:53.324040 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 10:11:53.324081 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:11:53.327278 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 10:11:53.327341 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 10:11:53.327688 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 10:11:53.327780 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 10:11:53.329330 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 10:11:53.329457 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 10:11:53.332403 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 10:11:53.338511 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 10:11:53.346879 systemd[1]: Switching root.
Sep 12 10:11:53.391472 systemd-journald[179]: Journal stopped
Sep 12 10:11:55.600843 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Sep 12 10:11:55.600935 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 10:11:55.600959 kernel: SELinux: policy capability open_perms=1
Sep 12 10:11:55.600979 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 10:11:55.600998 kernel: SELinux: policy capability always_check_network=0
Sep 12 10:11:55.601017 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 10:11:55.601037 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 10:11:55.601056 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 10:11:55.601076 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 10:11:55.601100 kernel: audit: type=1403 audit(1757671914.129:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 10:11:55.601160 systemd[1]: Successfully loaded SELinux policy in 72.903ms.
Sep 12 10:11:55.601210 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.002ms.
Sep 12 10:11:55.601234 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 10:11:55.601255 systemd[1]: Detected virtualization amazon.
Sep 12 10:11:55.601276 systemd[1]: Detected architecture x86-64.
Sep 12 10:11:55.601296 systemd[1]: Detected first boot.
Sep 12 10:11:55.601317 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 10:11:55.601341 zram_generator::config[1370]: No configuration found.
Sep 12 10:11:55.601368 kernel: Guest personality initialized and is inactive
Sep 12 10:11:55.601390 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 10:11:55.601411 kernel: Initialized host personality
Sep 12 10:11:55.601431 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 10:11:55.601452 systemd[1]: Populated /etc with preset unit settings.
Sep 12 10:11:55.601476 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 10:11:55.601504 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 10:11:55.601526 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 10:11:55.601550 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 10:11:55.601569 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 10:11:55.601588 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 10:11:55.601610 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 10:11:55.601630 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 10:11:55.601652 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 10:11:55.601674 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 10:11:55.601695 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 10:11:55.601717 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 10:11:55.601744 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 10:11:55.601767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 10:11:55.601790 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 10:11:55.601811 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 10:11:55.601835 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 10:11:55.601858 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 10:11:55.601880 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 10:11:55.601908 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 10:11:55.601930 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 10:11:55.601952 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 10:11:55.601973 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 10:11:55.601996 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 10:11:55.602017 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 10:11:55.602042 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 10:11:55.602061 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 10:11:55.602080 systemd[1]: Reached target swap.target - Swaps.
Sep 12 10:11:55.602100 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 10:11:55.602183 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 10:11:55.602202 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 10:11:55.602220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 10:11:55.602238 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 10:11:55.602256 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 10:11:55.602282 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 10:11:55.602304 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 10:11:55.602325 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 10:11:55.602345 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 10:11:55.602371 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:11:55.602391 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 10:11:55.602411 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 10:11:55.602429 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 10:11:55.602448 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 10:11:55.602466 systemd[1]: Reached target machines.target - Containers.
Sep 12 10:11:55.602484 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 10:11:55.602503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:11:55.602525 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 10:11:55.602543 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 10:11:55.602560 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:11:55.602580 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 10:11:55.602600 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:11:55.602623 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 10:11:55.602646 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:11:55.602670 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 10:11:55.602695 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 10:11:55.602716 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 10:11:55.602738 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 10:11:55.602760 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 10:11:55.602784 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:11:55.602806 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 10:11:55.602827 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 10:11:55.602848 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 10:11:55.602867 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 10:11:55.602890 kernel: loop: module loaded
Sep 12 10:11:55.602993 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 10:11:55.603017 kernel: fuse: init (API version 7.39)
Sep 12 10:11:55.603039 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 10:11:55.603060 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 10:11:55.603081 systemd[1]: Stopped verity-setup.service.
Sep 12 10:11:55.603102 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:11:55.603148 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 10:11:55.603174 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 10:11:55.603196 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 10:11:55.603221 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 10:11:55.603243 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 10:11:55.603265 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 10:11:55.603286 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 10:11:55.603307 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 10:11:55.603328 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 10:11:55.603350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:11:55.603371 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:11:55.603393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:11:55.603418 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:11:55.603440 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 10:11:55.603462 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 10:11:55.603483 kernel: ACPI: bus type drm_connector registered
Sep 12 10:11:55.603503 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:11:55.603525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:11:55.603546 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 10:11:55.603569 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 10:11:55.603601 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 10:11:55.603661 systemd-journald[1460]: Collecting audit messages is disabled.
Sep 12 10:11:55.603701 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 10:11:55.603723 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 10:11:55.603744 systemd-journald[1460]: Journal started
Sep 12 10:11:55.603789 systemd-journald[1460]: Runtime Journal (/run/log/journal/ec2695c16e81e8a3f30433e9f0ec8b8b) is 4.7M, max 38.2M, 33.4M free.
Sep 12 10:11:55.191436 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 10:11:55.199561 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 12 10:11:55.200082 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 10:11:55.609287 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 10:11:55.610403 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 10:11:55.611704 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 10:11:55.626534 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 10:11:55.636168 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 10:11:55.643204 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 10:11:55.643892 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 10:11:55.643936 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 10:11:55.648481 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 10:11:55.659047 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 10:11:55.668074 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 10:11:55.669636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:11:55.675327 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 10:11:55.689455 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 10:11:55.690519 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 10:11:55.694279 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 10:11:55.695001 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 10:11:55.701405 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 10:11:55.712334 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 10:11:55.716335 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 10:11:55.720687 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 10:11:55.735248 systemd-journald[1460]: Time spent on flushing to /var/log/journal/ec2695c16e81e8a3f30433e9f0ec8b8b is 79.694ms for 1014 entries.
Sep 12 10:11:55.735248 systemd-journald[1460]: System Journal (/var/log/journal/ec2695c16e81e8a3f30433e9f0ec8b8b) is 8M, max 195.6M, 187.6M free.
Sep 12 10:11:55.832314 systemd-journald[1460]: Received client request to flush runtime journal.
Sep 12 10:11:55.832378 kernel: loop0: detected capacity change from 0 to 229808
Sep 12 10:11:55.723610 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 10:11:55.724773 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 10:11:55.726545 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 10:11:55.742298 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 10:11:55.771803 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 10:11:55.772663 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 10:11:55.778317 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 10:11:55.789136 udevadm[1512]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 10:11:55.801437 systemd-tmpfiles[1506]: ACLs are not supported, ignoring.
Sep 12 10:11:55.801462 systemd-tmpfiles[1506]: ACLs are not supported, ignoring.
Sep 12 10:11:55.813495 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 10:11:55.826359 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 10:11:55.827821 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 10:11:55.837898 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 10:11:55.860208 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 10:11:55.899438 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 10:11:55.906173 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 10:11:55.926179 systemd-tmpfiles[1525]: ACLs are not supported, ignoring.
Sep 12 10:11:55.928257 systemd-tmpfiles[1525]: ACLs are not supported, ignoring.
Sep 12 10:11:55.936298 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 10:11:55.965192 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 10:11:56.012412 kernel: loop1: detected capacity change from 0 to 138176
Sep 12 10:11:56.168362 kernel: loop2: detected capacity change from 0 to 147912
Sep 12 10:11:56.201836 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 10:11:56.309576 kernel: loop3: detected capacity change from 0 to 62832
Sep 12 10:11:56.446268 kernel: loop4: detected capacity change from 0 to 229808
Sep 12 10:11:56.496153 kernel: loop5: detected capacity change from 0 to 138176
Sep 12 10:11:56.527167 kernel: loop6: detected capacity change from 0 to 147912
Sep 12 10:11:56.558162 kernel: loop7: detected capacity change from 0 to 62832
Sep 12 10:11:56.578674 (sd-merge)[1533]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 12 10:11:56.579472 (sd-merge)[1533]: Merged extensions into '/usr'.
Sep 12 10:11:56.586240 systemd[1]: Reload requested from client PID 1505 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 10:11:56.586410 systemd[1]: Reloading...
Sep 12 10:11:56.689155 zram_generator::config[1561]: No configuration found.
Sep 12 10:11:56.834885 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 10:11:56.926046 systemd[1]: Reloading finished in 337 ms.
Sep 12 10:11:56.946712 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 10:11:56.947683 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 10:11:56.958610 systemd[1]: Starting ensure-sysext.service...
Sep 12 10:11:56.961284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 10:11:56.965334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 10:11:56.989767 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 10:11:56.990035 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 10:11:56.990838 systemd[1]: Reload requested from client PID 1613 ('systemctl') (unit ensure-sysext.service)... Sep 12 10:11:56.990933 systemd[1]: Reloading... Sep 12 10:11:56.991537 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 10:11:56.991802 systemd-tmpfiles[1614]: ACLs are not supported, ignoring. Sep 12 10:11:56.991864 systemd-tmpfiles[1614]: ACLs are not supported, ignoring. Sep 12 10:11:56.999611 systemd-tmpfiles[1614]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 10:11:56.999628 systemd-tmpfiles[1614]: Skipping /boot Sep 12 10:11:57.014335 systemd-tmpfiles[1614]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 10:11:57.014348 systemd-tmpfiles[1614]: Skipping /boot Sep 12 10:11:57.037589 systemd-udevd[1615]: Using default interface naming scheme 'v255'. Sep 12 10:11:57.063164 zram_generator::config[1641]: No configuration found. Sep 12 10:11:57.247382 (udev-worker)[1686]: Network interface NamePolicy= disabled on kernel command line. Sep 12 10:11:57.347150 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 10:11:57.351266 kernel: ACPI: button: Power Button [PWRF] Sep 12 10:11:57.357408 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 12 10:11:57.362165 kernel: ACPI: button: Sleep Button [SLPF] Sep 12 10:11:57.376706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 12 10:11:57.385204 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1672) Sep 12 10:11:57.388158 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 12 10:11:57.516925 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Sep 12 10:11:57.624689 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 10:11:57.625106 systemd[1]: Reloading finished in 633 ms. Sep 12 10:11:57.638326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 10:11:57.641889 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 10:11:57.676176 ldconfig[1500]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 10:11:57.688601 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 10:11:57.701153 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 10:11:57.720025 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 10:11:57.734115 systemd[1]: Finished ensure-sysext.service. Sep 12 10:11:57.749684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 10:11:57.760530 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:11:57.770375 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 10:11:57.775340 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 10:11:57.776752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 12 10:11:57.780342 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 10:11:57.786777 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 10:11:57.795339 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 10:11:57.801705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 10:11:57.811323 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 10:11:57.812501 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 10:11:57.819304 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 10:11:57.819938 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 10:11:57.832827 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 10:11:57.841462 lvm[1812]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 10:11:57.845425 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 10:11:57.855370 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 10:11:57.857641 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 10:11:57.871048 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 10:11:57.882037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:11:57.882711 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 10:11:57.884806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 10:11:57.885054 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 10:11:57.886082 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 10:11:57.887912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 10:11:57.890421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 10:11:57.890665 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 10:11:57.892106 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 10:11:57.892351 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 10:11:57.895456 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 10:11:57.896544 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 10:11:57.916093 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 10:11:57.925564 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 10:11:57.926552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 10:11:57.927331 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 10:11:57.938580 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 10:11:57.944761 augenrules[1849]: No rules Sep 12 10:11:57.949658 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 10:11:57.953307 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 10:11:57.954311 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Sep 12 10:11:57.958646 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 10:11:57.962088 lvm[1847]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 10:11:57.972106 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 10:11:58.013804 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 10:11:58.016367 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 10:11:58.020626 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 10:11:58.098204 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 10:11:58.101022 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 10:11:58.117315 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:11:58.139297 systemd-resolved[1834]: Positive Trust Anchors:
Sep 12 10:11:58.139313 systemd-resolved[1834]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 10:11:58.139368 systemd-resolved[1834]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 10:11:58.151032 systemd-networkd[1833]: lo: Link UP Sep 12 10:11:58.151040 systemd-networkd[1833]: lo: Gained carrier Sep 12 10:11:58.152772 systemd-networkd[1833]: Enumeration completed Sep 12 10:11:58.153028 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 10:11:58.153226 systemd-networkd[1833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:11:58.153232 systemd-networkd[1833]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 10:11:58.155607 systemd-networkd[1833]: eth0: Link UP Sep 12 10:11:58.155904 systemd-networkd[1833]: eth0: Gained carrier Sep 12 10:11:58.155937 systemd-networkd[1833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:11:58.160080 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 10:11:58.166774 systemd-resolved[1834]: Defaulting to hostname 'linux'.
Sep 12 10:11:58.167285 systemd-networkd[1833]: eth0: DHCPv4 address 172.31.20.240/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 10:11:58.167398 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 10:11:58.171684 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 10:11:58.172492 systemd[1]: Reached target network.target - Network. Sep 12 10:11:58.173078 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 10:11:58.174318 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 10:11:58.175164 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 10:11:58.175843 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 10:11:58.176671 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 10:11:58.177383 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 10:11:58.178014 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 10:11:58.178635 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 10:11:58.178780 systemd[1]: Reached target paths.target - Path Units. Sep 12 10:11:58.179438 systemd[1]: Reached target timers.target - Timer Units. Sep 12 10:11:58.183574 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 10:11:58.186086 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 10:11:58.190321 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 10:11:58.190917 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Sep 12 10:11:58.191385 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 10:11:58.193839 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 10:11:58.194738 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 10:11:58.196155 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 10:11:58.196728 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 10:11:58.197760 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 10:11:58.198280 systemd[1]: Reached target basic.target - Basic System. Sep 12 10:11:58.198717 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 10:11:58.198760 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 10:11:58.203262 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 10:11:58.207078 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 10:11:58.219632 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 10:11:58.221893 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 10:11:58.226380 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 10:11:58.228923 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 10:11:58.236243 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 10:11:58.252348 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 10:11:58.254579 jq[1883]: false Sep 12 10:11:58.256096 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 12 10:11:58.264324 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 10:11:58.268336 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 10:11:58.302360 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 10:11:58.327469 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 10:11:58.330032 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 10:11:58.331813 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 10:11:58.333674 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 10:11:58.342281 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 10:11:58.355301 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 10:11:58.356353 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 10:11:58.356897 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 10:11:58.357189 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 12 10:11:58.361064 extend-filesystems[1884]: Found loop4 Sep 12 10:11:58.362921 extend-filesystems[1884]: Found loop5 Sep 12 10:11:58.362921 extend-filesystems[1884]: Found loop6 Sep 12 10:11:58.362921 extend-filesystems[1884]: Found loop7 Sep 12 10:11:58.362921 extend-filesystems[1884]: Found nvme0n1 Sep 12 10:11:58.362921 extend-filesystems[1884]: Found nvme0n1p1 Sep 12 10:11:58.362921 extend-filesystems[1884]: Found nvme0n1p2 Sep 12 10:11:58.362921 extend-filesystems[1884]: Found nvme0n1p3 Sep 12 10:11:58.362921 extend-filesystems[1884]: Found usr Sep 12 10:11:58.362921 extend-filesystems[1884]: Found nvme0n1p4 Sep 12 10:11:58.362921 extend-filesystems[1884]: Found nvme0n1p6 Sep 12 10:11:58.362921 extend-filesystems[1884]: Found nvme0n1p7 Sep 12 10:11:58.362921 extend-filesystems[1884]: Found nvme0n1p9 Sep 12 10:11:58.362921 extend-filesystems[1884]: Checking size of /dev/nvme0n1p9 Sep 12 10:11:58.366797 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 10:11:58.367862 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 10:11:58.377816 dbus-daemon[1882]: [system] SELinux support is enabled Sep 12 10:11:58.381603 dbus-daemon[1882]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1833 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 10:11:58.381723 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 12 10:11:58.391076 jq[1903]: true Sep 12 10:11:58.400871 ntpd[1886]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 08:14:39 UTC 2025 (1): Starting Sep 12 10:11:58.402451 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 08:14:39 UTC 2025 (1): Starting Sep 12 10:11:58.402451 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 10:11:58.402451 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: ---------------------------------------------------- Sep 12 10:11:58.402451 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: ntp-4 is maintained by Network Time Foundation, Sep 12 10:11:58.402451 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 10:11:58.402451 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: corporation. Support and training for ntp-4 are Sep 12 10:11:58.402451 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: available at https://www.nwtime.org/support Sep 12 10:11:58.402451 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: ---------------------------------------------------- Sep 12 10:11:58.400910 ntpd[1886]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 10:11:58.400920 ntpd[1886]: ---------------------------------------------------- Sep 12 10:11:58.400929 ntpd[1886]: ntp-4 is maintained by Network Time Foundation, Sep 12 10:11:58.400939 ntpd[1886]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 10:11:58.400949 ntpd[1886]: corporation. Support and training for ntp-4 are Sep 12 10:11:58.400960 ntpd[1886]: available at https://www.nwtime.org/support Sep 12 10:11:58.400970 ntpd[1886]: ---------------------------------------------------- Sep 12 10:11:58.427830 ntpd[1886]: proto: precision = 0.099 usec (-23) Sep 12 10:11:58.429271 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: proto: precision = 0.099 usec (-23) Sep 12 10:11:58.439240 coreos-metadata[1881]: Sep 12 10:11:58.439 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 10:11:58.439941 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 10:11:58.440933 coreos-metadata[1881]: Sep 12 10:11:58.440 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 10:11:58.441581 dbus-daemon[1882]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 10:11:58.442551 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 10:11:58.442589 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 10:11:58.442988 coreos-metadata[1881]: Sep 12 10:11:58.442 INFO Fetch successful Sep 12 10:11:58.442988 coreos-metadata[1881]: Sep 12 10:11:58.442 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 10:11:58.443231 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 10:11:58.443260 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 10:11:58.443921 ntpd[1886]: basedate set to 2025-08-31 Sep 12 10:11:58.444486 coreos-metadata[1881]: Sep 12 10:11:58.444 INFO Fetch successful Sep 12 10:11:58.444486 coreos-metadata[1881]: Sep 12 10:11:58.444 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 10:11:58.444588 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: basedate set to 2025-08-31 Sep 12 10:11:58.444588 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: gps base set to 2025-08-31 (week 2382) Sep 12 10:11:58.443950 ntpd[1886]: gps base set to 2025-08-31 (week 2382) Sep 12 10:11:58.446933 coreos-metadata[1881]: Sep 12 10:11:58.446 INFO Fetch successful Sep 12 10:11:58.446933 coreos-metadata[1881]: Sep 12 10:11:58.446 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 10:11:58.446933 coreos-metadata[1881]: Sep 12 10:11:58.446 INFO Fetch successful Sep 12 10:11:58.446933 coreos-metadata[1881]: Sep 12 10:11:58.446 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 10:11:58.453933 coreos-metadata[1881]: Sep 12 10:11:58.451 INFO Fetch failed with 404: resource not found Sep 12 10:11:58.453933 coreos-metadata[1881]: Sep 12 10:11:58.451 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 10:11:58.453933 coreos-metadata[1881]: Sep 12 10:11:58.453 INFO Fetch successful Sep 12 10:11:58.453933 coreos-metadata[1881]: Sep 12 10:11:58.453 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 10:11:58.454249 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 10:11:58.454249 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 10:11:58.451256 ntpd[1886]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 10:11:58.449663 (ntainerd)[1910]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 10:11:58.451311 ntpd[1886]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 10:11:58.454411 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 10:11:58.459219 ntpd[1886]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 10:11:58.466477 coreos-metadata[1881]: Sep 12 10:11:58.463 INFO Fetch successful Sep 12 10:11:58.466477 coreos-metadata[1881]: Sep 12 10:11:58.463 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 10:11:58.466477 coreos-metadata[1881]: Sep 12 10:11:58.465 INFO Fetch successful Sep 12 10:11:58.466477 coreos-metadata[1881]: Sep 12 10:11:58.465 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 10:11:58.466645 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 10:11:58.466645 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: Listen normally on 3 eth0 172.31.20.240:123 Sep 12 10:11:58.466645 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: Listen normally on 4 lo [::1]:123 Sep 12 10:11:58.466645 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: bind(21) AF_INET6 fe80::421:78ff:feb6:4421%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 10:11:58.466645 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: unable to create socket on eth0 (5) for fe80::421:78ff:feb6:4421%2#123 Sep 12 10:11:58.466645 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: failed to init interface for address fe80::421:78ff:feb6:4421%2 Sep 12 10:11:58.466645 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: Listening on routing socket on fd #21 for interface updates Sep 12 10:11:58.459278 ntpd[1886]: Listen normally on 3 eth0 172.31.20.240:123 Sep 12 10:11:58.459322 ntpd[1886]: Listen normally on 4 lo [::1]:123 Sep 12 10:11:58.459375 ntpd[1886]: bind(21) AF_INET6 fe80::421:78ff:feb6:4421%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 10:11:58.459395 ntpd[1886]: unable to create socket on eth0 (5) for fe80::421:78ff:feb6:4421%2#123
Sep 12 10:11:58.459411 ntpd[1886]: failed to init interface for address fe80::421:78ff:feb6:4421%2 Sep 12 10:11:58.459449 ntpd[1886]: Listening on routing socket on fd #21 for interface updates Sep 12 10:11:58.475213 coreos-metadata[1881]: Sep 12 10:11:58.467 INFO Fetch successful Sep 12 10:11:58.475213 coreos-metadata[1881]: Sep 12 10:11:58.467 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 10:11:58.475213 coreos-metadata[1881]: Sep 12 10:11:58.473 INFO Fetch successful Sep 12 10:11:58.475444 update_engine[1902]: I20250912 10:11:58.470960 1902 main.cc:92] Flatcar Update Engine starting Sep 12 10:11:58.480976 ntpd[1886]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 10:11:58.483443 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 10:11:58.483443 ntpd[1886]: 12 Sep 10:11:58 ntpd[1886]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 10:11:58.481023 ntpd[1886]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 10:11:58.493172 update_engine[1902]: I20250912 10:11:58.490490 1902 update_check_scheduler.cc:74] Next update check in 11m7s Sep 12 10:11:58.493622 systemd[1]: Started update-engine.service - Update Engine. Sep 12 10:11:58.505350 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 10:11:58.514900 jq[1915]: true Sep 12 10:11:58.517052 tar[1907]: linux-amd64/LICENSE Sep 12 10:11:58.517052 tar[1907]: linux-amd64/helm Sep 12 10:11:58.520246 extend-filesystems[1884]: Resized partition /dev/nvme0n1p9 Sep 12 10:11:58.526763 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 10:11:58.535870 extend-filesystems[1939]: resize2fs 1.47.1 (20-May-2024) Sep 12 10:11:58.554190 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 10:11:58.610543 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 10:11:58.611380 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 10:11:58.665146 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1690) Sep 12 10:11:58.757179 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 10:11:58.795724 extend-filesystems[1939]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 10:11:58.795724 extend-filesystems[1939]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 10:11:58.795724 extend-filesystems[1939]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 10:11:58.815094 extend-filesystems[1884]: Resized filesystem in /dev/nvme0n1p9 Sep 12 10:11:58.826573 bash[1978]: Updated "/home/core/.ssh/authorized_keys" Sep 12 10:11:58.797562 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 10:11:58.797847 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 10:11:58.805210 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 10:11:58.814838 systemd-logind[1901]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 10:11:58.814862 systemd-logind[1901]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 12 10:11:58.814899 systemd-logind[1901]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 10:11:58.815358 systemd[1]: Starting sshkeys.service... Sep 12 10:11:58.821260 systemd-logind[1901]: New seat seat0. Sep 12 10:11:58.831639 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 10:11:58.874526 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 10:11:58.882639 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Sep 12 10:11:58.927437 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 10:11:58.935331 dbus-daemon[1882]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 10:11:58.942650 dbus-daemon[1882]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1927 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 10:11:58.954586 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 10:11:59.042647 locksmithd[1934]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 10:11:59.054490 polkitd[2038]: Started polkitd version 121 Sep 12 10:11:59.085093 coreos-metadata[2021]: Sep 12 10:11:59.084 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 10:11:59.089640 coreos-metadata[2021]: Sep 12 10:11:59.089 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 10:11:59.091974 coreos-metadata[2021]: Sep 12 10:11:59.091 INFO Fetch successful Sep 12 10:11:59.092088 coreos-metadata[2021]: Sep 12 10:11:59.092 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 10:11:59.096700 coreos-metadata[2021]: Sep 12 10:11:59.096 INFO Fetch successful Sep 12 10:11:59.100649 unknown[2021]: wrote ssh authorized keys file for user: core Sep 12 10:11:59.105804 polkitd[2038]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 10:11:59.107491 polkitd[2038]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 10:11:59.108897 polkitd[2038]: Finished loading, compiling and executing 2 rules Sep 12 10:11:59.110771 dbus-daemon[1882]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 10:11:59.111630 polkitd[2038]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 10:11:59.112813 systemd[1]: Started polkit.service - Authorization Manager. 
Sep 12 10:11:59.169089 update-ssh-keys[2069]: Updated "/home/core/.ssh/authorized_keys" Sep 12 10:11:59.170468 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 10:11:59.174336 systemd[1]: Finished sshkeys.service. Sep 12 10:11:59.187591 systemd-hostnamed[1927]: Hostname set to (transient) Sep 12 10:11:59.187716 systemd-resolved[1834]: System hostname changed to 'ip-172-31-20-240'. Sep 12 10:11:59.272823 sshd_keygen[1926]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 10:11:59.314530 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 10:11:59.323472 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 10:11:59.333920 systemd[1]: Started sshd@0-172.31.20.240:22-147.75.109.163:42616.service - OpenSSH per-connection server daemon (147.75.109.163:42616). Sep 12 10:11:59.367638 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 10:11:59.367959 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 10:11:59.376600 containerd[1910]: time="2025-09-12T10:11:59.376498712Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 12 10:11:59.378506 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 12 10:11:59.402148 ntpd[1886]: bind(24) AF_INET6 fe80::421:78ff:feb6:4421%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 10:11:59.402932 ntpd[1886]: 12 Sep 10:11:59 ntpd[1886]: bind(24) AF_INET6 fe80::421:78ff:feb6:4421%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 10:11:59.402932 ntpd[1886]: 12 Sep 10:11:59 ntpd[1886]: unable to create socket on eth0 (6) for fe80::421:78ff:feb6:4421%2#123 Sep 12 10:11:59.402932 ntpd[1886]: 12 Sep 10:11:59 ntpd[1886]: failed to init interface for address fe80::421:78ff:feb6:4421%2 Sep 12 10:11:59.402190 ntpd[1886]: unable to create socket on eth0 (6) for fe80::421:78ff:feb6:4421%2#123 Sep 12 10:11:59.402207 ntpd[1886]: failed to init interface for address fe80::421:78ff:feb6:4421%2 Sep 12 10:11:59.411303 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 10:11:59.422906 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 10:11:59.435644 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 10:11:59.437288 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 10:11:59.468330 containerd[1910]: time="2025-09-12T10:11:59.468254629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:11:59.470582 containerd[1910]: time="2025-09-12T10:11:59.470535014Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.105-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:11:59.471164 containerd[1910]: time="2025-09-12T10:11:59.470702002Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 10:11:59.471164 containerd[1910]: time="2025-09-12T10:11:59.470732890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 10:11:59.471164 containerd[1910]: time="2025-09-12T10:11:59.470928309Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 10:11:59.471164 containerd[1910]: time="2025-09-12T10:11:59.470953399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 10:11:59.471164 containerd[1910]: time="2025-09-12T10:11:59.471030407Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:11:59.471164 containerd[1910]: time="2025-09-12T10:11:59.471048917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:11:59.471641 containerd[1910]: time="2025-09-12T10:11:59.471616353Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:11:59.471718 containerd[1910]: time="2025-09-12T10:11:59.471703137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 10:11:59.471789 containerd[1910]: time="2025-09-12T10:11:59.471776524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:11:59.471890 containerd[1910]: time="2025-09-12T10:11:59.471842040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 10:11:59.472725 containerd[1910]: time="2025-09-12T10:11:59.472023628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:11:59.472725 containerd[1910]: time="2025-09-12T10:11:59.472307575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:11:59.472725 containerd[1910]: time="2025-09-12T10:11:59.472529201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:11:59.472725 containerd[1910]: time="2025-09-12T10:11:59.472550187Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 10:11:59.472725 containerd[1910]: time="2025-09-12T10:11:59.472641050Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 10:11:59.472725 containerd[1910]: time="2025-09-12T10:11:59.472699872Z" level=info msg="metadata content store policy set" policy=shared Sep 12 10:11:59.478549 containerd[1910]: time="2025-09-12T10:11:59.478498119Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 10:11:59.478664 containerd[1910]: time="2025-09-12T10:11:59.478574399Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 10:11:59.478664 containerd[1910]: time="2025-09-12T10:11:59.478598295Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 10:11:59.478664 containerd[1910]: time="2025-09-12T10:11:59.478619936Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 10:11:59.478664 containerd[1910]: time="2025-09-12T10:11:59.478639598Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..."
type=io.containerd.runtime.v1 Sep 12 10:11:59.478851 containerd[1910]: time="2025-09-12T10:11:59.478829816Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 10:11:59.479209 containerd[1910]: time="2025-09-12T10:11:59.479185200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 10:11:59.479357 containerd[1910]: time="2025-09-12T10:11:59.479334654Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 10:11:59.479467 containerd[1910]: time="2025-09-12T10:11:59.479364505Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 10:11:59.479467 containerd[1910]: time="2025-09-12T10:11:59.479387533Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 10:11:59.479467 containerd[1910]: time="2025-09-12T10:11:59.479408055Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 10:11:59.479467 containerd[1910]: time="2025-09-12T10:11:59.479430663Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 10:11:59.479467 containerd[1910]: time="2025-09-12T10:11:59.479449430Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 10:11:59.479635 containerd[1910]: time="2025-09-12T10:11:59.479470544Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 10:11:59.479635 containerd[1910]: time="2025-09-12T10:11:59.479496232Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 12 10:11:59.479635 containerd[1910]: time="2025-09-12T10:11:59.479515063Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 10:11:59.479635 containerd[1910]: time="2025-09-12T10:11:59.479534603Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 10:11:59.479635 containerd[1910]: time="2025-09-12T10:11:59.479551996Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 10:11:59.479635 containerd[1910]: time="2025-09-12T10:11:59.479581252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479635 containerd[1910]: time="2025-09-12T10:11:59.479602124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479635 containerd[1910]: time="2025-09-12T10:11:59.479625675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479646136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479665033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479685013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479709788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479730903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479751782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479774480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479792722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479810459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479829598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479850661Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479881423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.479913 containerd[1910]: time="2025-09-12T10:11:59.479901691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.481019 containerd[1910]: time="2025-09-12T10:11:59.479919503Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 10:11:59.481019 containerd[1910]: time="2025-09-12T10:11:59.479973181Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 10:11:59.481019 containerd[1910]: time="2025-09-12T10:11:59.479997852Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 10:11:59.481019 containerd[1910]: time="2025-09-12T10:11:59.480014610Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 10:11:59.481019 containerd[1910]: time="2025-09-12T10:11:59.480033237Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 10:11:59.481019 containerd[1910]: time="2025-09-12T10:11:59.480048728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 10:11:59.481019 containerd[1910]: time="2025-09-12T10:11:59.480066231Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 10:11:59.481019 containerd[1910]: time="2025-09-12T10:11:59.480081563Z" level=info msg="NRI interface is disabled by configuration." Sep 12 10:11:59.481019 containerd[1910]: time="2025-09-12T10:11:59.480097313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 10:11:59.481369 containerd[1910]: time="2025-09-12T10:11:59.480637573Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 10:11:59.481369 containerd[1910]: time="2025-09-12T10:11:59.480708221Z" level=info msg="Connect containerd service" Sep 12 10:11:59.481369 containerd[1910]: time="2025-09-12T10:11:59.480757962Z" level=info msg="using legacy CRI server" Sep 12 10:11:59.481369 containerd[1910]: time="2025-09-12T10:11:59.480768241Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 10:11:59.481369 containerd[1910]: time="2025-09-12T10:11:59.480948511Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 10:11:59.483745 containerd[1910]: time="2025-09-12T10:11:59.481825145Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 10:11:59.483745 containerd[1910]: time="2025-09-12T10:11:59.481958696Z" level=info msg="Start subscribing containerd event" Sep 12 10:11:59.483745 containerd[1910]: time="2025-09-12T10:11:59.482012729Z" level=info msg="Start recovering state" Sep 12 10:11:59.483745 containerd[1910]: time="2025-09-12T10:11:59.482081810Z" level=info msg="Start event monitor" Sep 12 10:11:59.483745 containerd[1910]: time="2025-09-12T10:11:59.482096125Z" level=info msg="Start 
snapshots syncer" Sep 12 10:11:59.483745 containerd[1910]: time="2025-09-12T10:11:59.482107574Z" level=info msg="Start cni network conf syncer for default" Sep 12 10:11:59.483745 containerd[1910]: time="2025-09-12T10:11:59.482118432Z" level=info msg="Start streaming server" Sep 12 10:11:59.483745 containerd[1910]: time="2025-09-12T10:11:59.482603128Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 10:11:59.483745 containerd[1910]: time="2025-09-12T10:11:59.482655400Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 10:11:59.483745 containerd[1910]: time="2025-09-12T10:11:59.482718431Z" level=info msg="containerd successfully booted in 0.109983s" Sep 12 10:11:59.482825 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 10:11:59.618687 sshd[2092]: Accepted publickey for core from 147.75.109.163 port 42616 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E Sep 12 10:11:59.621522 sshd-session[2092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:59.630572 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 10:11:59.635653 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 10:11:59.652013 systemd-logind[1901]: New session 1 of user core. Sep 12 10:11:59.671777 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 10:11:59.683665 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 10:11:59.697367 (systemd)[2107]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 10:11:59.702246 systemd-logind[1901]: New session c1 of user core. Sep 12 10:11:59.781859 tar[1907]: linux-amd64/README.md Sep 12 10:11:59.798599 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 10:11:59.884074 systemd[2107]: Queued start job for default target default.target. 
Sep 12 10:11:59.891555 systemd[2107]: Created slice app.slice - User Application Slice. Sep 12 10:11:59.891599 systemd[2107]: Reached target paths.target - Paths. Sep 12 10:11:59.891659 systemd[2107]: Reached target timers.target - Timers. Sep 12 10:11:59.893176 systemd[2107]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 10:11:59.905888 systemd[2107]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 10:11:59.906046 systemd[2107]: Reached target sockets.target - Sockets. Sep 12 10:11:59.906112 systemd[2107]: Reached target basic.target - Basic System. Sep 12 10:11:59.906190 systemd[2107]: Reached target default.target - Main User Target. Sep 12 10:11:59.906234 systemd[2107]: Startup finished in 192ms. Sep 12 10:11:59.906330 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 10:11:59.915365 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 10:11:59.944313 systemd-networkd[1833]: eth0: Gained IPv6LL Sep 12 10:11:59.947484 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 10:11:59.948659 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 10:11:59.954496 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 10:11:59.958432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:11:59.963239 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 10:12:00.019053 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 10:12:00.032895 amazon-ssm-agent[2120]: Initializing new seelog logger Sep 12 10:12:00.032895 amazon-ssm-agent[2120]: New Seelog Logger Creation Complete Sep 12 10:12:00.033333 amazon-ssm-agent[2120]: 2025/09/12 10:12:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 10:12:00.033333 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 12 10:12:00.033809 amazon-ssm-agent[2120]: 2025/09/12 10:12:00 processing appconfig overrides Sep 12 10:12:00.033953 amazon-ssm-agent[2120]: 2025/09/12 10:12:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 10:12:00.033953 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 10:12:00.034060 amazon-ssm-agent[2120]: 2025/09/12 10:12:00 processing appconfig overrides Sep 12 10:12:00.034449 amazon-ssm-agent[2120]: 2025/09/12 10:12:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 10:12:00.034449 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 10:12:00.034540 amazon-ssm-agent[2120]: 2025/09/12 10:12:00 processing appconfig overrides Sep 12 10:12:00.035161 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO Proxy environment variables: Sep 12 10:12:00.039148 amazon-ssm-agent[2120]: 2025/09/12 10:12:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 10:12:00.039148 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 10:12:00.039148 amazon-ssm-agent[2120]: 2025/09/12 10:12:00 processing appconfig overrides Sep 12 10:12:00.075284 systemd[1]: Started sshd@1-172.31.20.240:22-147.75.109.163:42618.service - OpenSSH per-connection server daemon (147.75.109.163:42618). Sep 12 10:12:00.135585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO https_proxy: Sep 12 10:12:00.233991 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO http_proxy: Sep 12 10:12:00.262385 sshd[2140]: Accepted publickey for core from 147.75.109.163 port 42618 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E Sep 12 10:12:00.264769 sshd-session[2140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:12:00.274076 systemd-logind[1901]: New session 2 of user core. Sep 12 10:12:00.277366 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 12 10:12:00.333574 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO no_proxy: Sep 12 10:12:00.407711 sshd[2142]: Connection closed by 147.75.109.163 port 42618 Sep 12 10:12:00.408670 sshd-session[2140]: pam_unix(sshd:session): session closed for user core Sep 12 10:12:00.435182 systemd[1]: sshd@1-172.31.20.240:22-147.75.109.163:42618.service: Deactivated successfully. Sep 12 10:12:00.436284 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO Checking if agent identity type OnPrem can be assumed Sep 12 10:12:00.444132 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 10:12:00.495208 systemd-logind[1901]: Session 2 logged out. Waiting for processes to exit. Sep 12 10:12:00.512792 systemd[1]: Started sshd@2-172.31.20.240:22-147.75.109.163:42626.service - OpenSSH per-connection server daemon (147.75.109.163:42626). Sep 12 10:12:00.522332 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO Checking if agent identity type EC2 can be assumed Sep 12 10:12:00.522318 systemd-logind[1901]: Removed session 2. Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO Agent will take identity from EC2 Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [Registrar] Starting registrar module Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [EC2Identity] EC2 registration was successful. Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [CredentialRefresher] credentialRefresher has started Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 10:12:00.522585 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 10:12:00.539451 amazon-ssm-agent[2120]: 2025-09-12 10:12:00 INFO [CredentialRefresher] Next credential rotation will be in 30.45830582325 minutes Sep 12 10:12:00.731327 sshd[2148]: Accepted publickey for core from 147.75.109.163 port 42626 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E Sep 12 10:12:00.735856 sshd-session[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:12:00.746363 systemd-logind[1901]: New session 3 of user core. Sep 12 10:12:00.754389 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 10:12:00.885386 sshd[2151]: Connection closed by 147.75.109.163 port 42626 Sep 12 10:12:00.886104 sshd-session[2148]: pam_unix(sshd:session): session closed for user core Sep 12 10:12:00.895000 systemd[1]: sshd@2-172.31.20.240:22-147.75.109.163:42626.service: Deactivated successfully. Sep 12 10:12:00.900563 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 10:12:00.903492 systemd-logind[1901]: Session 3 logged out. Waiting for processes to exit. Sep 12 10:12:00.904787 systemd-logind[1901]: Removed session 3. 
Sep 12 10:12:01.630978 amazon-ssm-agent[2120]: 2025-09-12 10:12:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 10:12:01.748231 amazon-ssm-agent[2120]: 2025-09-12 10:12:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2157) started Sep 12 10:12:01.852739 amazon-ssm-agent[2120]: 2025-09-12 10:12:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 10:12:02.401388 ntpd[1886]: Listen normally on 7 eth0 [fe80::421:78ff:feb6:4421%2]:123 Sep 12 10:12:02.401973 ntpd[1886]: 12 Sep 10:12:02 ntpd[1886]: Listen normally on 7 eth0 [fe80::421:78ff:feb6:4421%2]:123 Sep 12 10:12:02.735400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:12:02.736816 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 10:12:02.739228 systemd[1]: Startup finished in 602ms (kernel) + 10.402s (initrd) + 8.680s (userspace) = 19.685s. Sep 12 10:12:02.742859 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 10:12:04.087388 kubelet[2172]: E0912 10:12:04.087346 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 10:12:04.090065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 10:12:04.090230 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 10:12:04.090538 systemd[1]: kubelet.service: Consumed 1.176s CPU time, 270.4M memory peak. Sep 12 10:12:06.385838 systemd-resolved[1834]: Clock change detected. 
Flushing caches. Sep 12 10:12:11.905779 systemd[1]: Started sshd@3-172.31.20.240:22-147.75.109.163:46470.service - OpenSSH per-connection server daemon (147.75.109.163:46470). Sep 12 10:12:12.064236 sshd[2184]: Accepted publickey for core from 147.75.109.163 port 46470 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E Sep 12 10:12:12.065618 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:12:12.071353 systemd-logind[1901]: New session 4 of user core. Sep 12 10:12:12.080747 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 10:12:12.198211 sshd[2186]: Connection closed by 147.75.109.163 port 46470 Sep 12 10:12:12.199037 sshd-session[2184]: pam_unix(sshd:session): session closed for user core Sep 12 10:12:12.201835 systemd[1]: sshd@3-172.31.20.240:22-147.75.109.163:46470.service: Deactivated successfully. Sep 12 10:12:12.203725 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 10:12:12.204979 systemd-logind[1901]: Session 4 logged out. Waiting for processes to exit. Sep 12 10:12:12.205826 systemd-logind[1901]: Removed session 4. Sep 12 10:12:12.247343 systemd[1]: Started sshd@4-172.31.20.240:22-147.75.109.163:46480.service - OpenSSH per-connection server daemon (147.75.109.163:46480). Sep 12 10:12:12.405753 sshd[2192]: Accepted publickey for core from 147.75.109.163 port 46480 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E Sep 12 10:12:12.407221 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:12:12.411846 systemd-logind[1901]: New session 5 of user core. Sep 12 10:12:12.417692 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 10:12:12.535435 sshd[2194]: Connection closed by 147.75.109.163 port 46480 Sep 12 10:12:12.536552 sshd-session[2192]: pam_unix(sshd:session): session closed for user core Sep 12 10:12:12.540039 systemd[1]: sshd@4-172.31.20.240:22-147.75.109.163:46480.service: Deactivated successfully. Sep 12 10:12:12.542099 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 10:12:12.543753 systemd-logind[1901]: Session 5 logged out. Waiting for processes to exit. Sep 12 10:12:12.545159 systemd-logind[1901]: Removed session 5. Sep 12 10:12:12.568787 systemd[1]: Started sshd@5-172.31.20.240:22-147.75.109.163:46484.service - OpenSSH per-connection server daemon (147.75.109.163:46484). Sep 12 10:12:12.727249 sshd[2200]: Accepted publickey for core from 147.75.109.163 port 46484 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E Sep 12 10:12:12.728673 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:12:12.733358 systemd-logind[1901]: New session 6 of user core. Sep 12 10:12:12.739732 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 10:12:12.858326 sshd[2202]: Connection closed by 147.75.109.163 port 46484 Sep 12 10:12:12.859209 sshd-session[2200]: pam_unix(sshd:session): session closed for user core Sep 12 10:12:12.862952 systemd[1]: sshd@5-172.31.20.240:22-147.75.109.163:46484.service: Deactivated successfully. Sep 12 10:12:12.865208 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 10:12:12.866831 systemd-logind[1901]: Session 6 logged out. Waiting for processes to exit. Sep 12 10:12:12.868002 systemd-logind[1901]: Removed session 6. Sep 12 10:12:12.895847 systemd[1]: Started sshd@6-172.31.20.240:22-147.75.109.163:46488.service - OpenSSH per-connection server daemon (147.75.109.163:46488). 
Sep 12 10:12:13.055874 sshd[2208]: Accepted publickey for core from 147.75.109.163 port 46488 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E Sep 12 10:12:13.057266 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:12:13.062319 systemd-logind[1901]: New session 7 of user core. Sep 12 10:12:13.075705 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 10:12:13.220867 sudo[2211]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 10:12:13.221177 sudo[2211]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:12:13.237122 sudo[2211]: pam_unix(sudo:session): session closed for user root Sep 12 10:12:13.259515 sshd[2210]: Connection closed by 147.75.109.163 port 46488 Sep 12 10:12:13.260277 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Sep 12 10:12:13.263568 systemd[1]: sshd@6-172.31.20.240:22-147.75.109.163:46488.service: Deactivated successfully. Sep 12 10:12:13.265334 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 10:12:13.266848 systemd-logind[1901]: Session 7 logged out. Waiting for processes to exit. Sep 12 10:12:13.268079 systemd-logind[1901]: Removed session 7. Sep 12 10:12:13.295734 systemd[1]: Started sshd@7-172.31.20.240:22-147.75.109.163:46504.service - OpenSSH per-connection server daemon (147.75.109.163:46504). Sep 12 10:12:13.455286 sshd[2217]: Accepted publickey for core from 147.75.109.163 port 46504 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E Sep 12 10:12:13.456717 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:12:13.461504 systemd-logind[1901]: New session 8 of user core. Sep 12 10:12:13.468684 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 10:12:13.567109 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 10:12:13.567549 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:12:13.571786 sudo[2221]: pam_unix(sudo:session): session closed for user root
Sep 12 10:12:13.577439 sudo[2220]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 10:12:13.577914 sudo[2220]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:12:13.598351 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 10:12:13.626223 augenrules[2243]: No rules
Sep 12 10:12:13.627565 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 10:12:13.627784 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 10:12:13.629835 sudo[2220]: pam_unix(sudo:session): session closed for user root
Sep 12 10:12:13.651988 sshd[2219]: Connection closed by 147.75.109.163 port 46504
Sep 12 10:12:13.652673 sshd-session[2217]: pam_unix(sshd:session): session closed for user core
Sep 12 10:12:13.656099 systemd[1]: sshd@7-172.31.20.240:22-147.75.109.163:46504.service: Deactivated successfully.
Sep 12 10:12:13.658108 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 10:12:13.659974 systemd-logind[1901]: Session 8 logged out. Waiting for processes to exit.
Sep 12 10:12:13.661070 systemd-logind[1901]: Removed session 8.
Sep 12 10:12:13.693876 systemd[1]: Started sshd@8-172.31.20.240:22-147.75.109.163:46508.service - OpenSSH per-connection server daemon (147.75.109.163:46508).
Sep 12 10:12:13.852973 sshd[2252]: Accepted publickey for core from 147.75.109.163 port 46508 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:12:13.854311 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:12:13.858747 systemd-logind[1901]: New session 9 of user core.
Sep 12 10:12:13.872682 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 10:12:13.970576 sudo[2255]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 10:12:13.970873 sudo[2255]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:12:14.692913 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 10:12:14.693076 (dockerd)[2272]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 10:12:15.092018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 10:12:15.098679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:12:15.395084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:12:15.399906 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 10:12:15.447890 kubelet[2285]: E0912 10:12:15.447782 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 10:12:15.451744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 10:12:15.451880 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 10:12:15.452440 systemd[1]: kubelet.service: Consumed 160ms CPU time, 110.4M memory peak.
Sep 12 10:12:15.486630 dockerd[2272]: time="2025-09-12T10:12:15.486567629Z" level=info msg="Starting up"
Sep 12 10:12:15.666744 dockerd[2272]: time="2025-09-12T10:12:15.666417774Z" level=info msg="Loading containers: start."
Sep 12 10:12:15.851487 kernel: Initializing XFRM netlink socket
Sep 12 10:12:15.904585 (udev-worker)[2309]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 10:12:15.973106 systemd-networkd[1833]: docker0: Link UP
Sep 12 10:12:16.012329 dockerd[2272]: time="2025-09-12T10:12:16.012279505Z" level=info msg="Loading containers: done."
Sep 12 10:12:16.033686 dockerd[2272]: time="2025-09-12T10:12:16.033230273Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 10:12:16.033686 dockerd[2272]: time="2025-09-12T10:12:16.033337894Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 12 10:12:16.033686 dockerd[2272]: time="2025-09-12T10:12:16.033496017Z" level=info msg="Daemon has completed initialization"
Sep 12 10:12:16.075667 dockerd[2272]: time="2025-09-12T10:12:16.075601027Z" level=info msg="API listen on /run/docker.sock"
Sep 12 10:12:16.075947 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 10:12:17.204829 containerd[1910]: time="2025-09-12T10:12:17.204790571Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 12 10:12:17.775159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3319515694.mount: Deactivated successfully.
Sep 12 10:12:19.370143 containerd[1910]: time="2025-09-12T10:12:19.370089995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:19.371493 containerd[1910]: time="2025-09-12T10:12:19.371350751Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Sep 12 10:12:19.373068 containerd[1910]: time="2025-09-12T10:12:19.372961910Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:19.375867 containerd[1910]: time="2025-09-12T10:12:19.375825830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:19.377378 containerd[1910]: time="2025-09-12T10:12:19.377149123Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.172317818s"
Sep 12 10:12:19.377378 containerd[1910]: time="2025-09-12T10:12:19.377197345Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 12 10:12:19.377821 containerd[1910]: time="2025-09-12T10:12:19.377792381Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 12 10:12:21.225273 containerd[1910]: time="2025-09-12T10:12:21.225202558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:21.226641 containerd[1910]: time="2025-09-12T10:12:21.226438385Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Sep 12 10:12:21.227932 containerd[1910]: time="2025-09-12T10:12:21.227872012Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:21.233476 containerd[1910]: time="2025-09-12T10:12:21.231757071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:21.235303 containerd[1910]: time="2025-09-12T10:12:21.235260128Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.857427188s"
Sep 12 10:12:21.235733 containerd[1910]: time="2025-09-12T10:12:21.235702584Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 12 10:12:21.236592 containerd[1910]: time="2025-09-12T10:12:21.236561436Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 12 10:12:22.721597 containerd[1910]: time="2025-09-12T10:12:22.721545013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:22.723347 containerd[1910]: time="2025-09-12T10:12:22.723282862Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Sep 12 10:12:22.725372 containerd[1910]: time="2025-09-12T10:12:22.725322554Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:22.728901 containerd[1910]: time="2025-09-12T10:12:22.728827893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:22.730256 containerd[1910]: time="2025-09-12T10:12:22.730100189Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.493503624s"
Sep 12 10:12:22.730256 containerd[1910]: time="2025-09-12T10:12:22.730146471Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 12 10:12:22.731279 containerd[1910]: time="2025-09-12T10:12:22.731249682Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 12 10:12:23.908621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387843032.mount: Deactivated successfully.
Sep 12 10:12:24.529046 containerd[1910]: time="2025-09-12T10:12:24.528975280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:24.532006 containerd[1910]: time="2025-09-12T10:12:24.531834340Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Sep 12 10:12:24.535683 containerd[1910]: time="2025-09-12T10:12:24.535638084Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:24.539943 containerd[1910]: time="2025-09-12T10:12:24.539884130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:24.540886 containerd[1910]: time="2025-09-12T10:12:24.540511911Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.809226422s"
Sep 12 10:12:24.540886 containerd[1910]: time="2025-09-12T10:12:24.540571232Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 12 10:12:24.541285 containerd[1910]: time="2025-09-12T10:12:24.541253885Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 12 10:12:25.066338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1540018877.mount: Deactivated successfully.
Sep 12 10:12:25.592042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 10:12:25.599688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:12:25.909865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:12:25.916378 (kubelet)[2607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 10:12:26.034750 kubelet[2607]: E0912 10:12:26.034636 2607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 10:12:26.037987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 10:12:26.038188 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 10:12:26.038822 systemd[1]: kubelet.service: Consumed 192ms CPU time, 107M memory peak.
Sep 12 10:12:26.851784 containerd[1910]: time="2025-09-12T10:12:26.851720868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:26.854861 containerd[1910]: time="2025-09-12T10:12:26.854620755Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 12 10:12:26.858726 containerd[1910]: time="2025-09-12T10:12:26.858265266Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:26.864987 containerd[1910]: time="2025-09-12T10:12:26.864943177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:26.866118 containerd[1910]: time="2025-09-12T10:12:26.866085752Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.324798032s"
Sep 12 10:12:26.866230 containerd[1910]: time="2025-09-12T10:12:26.866216098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 12 10:12:26.866754 containerd[1910]: time="2025-09-12T10:12:26.866724214Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 10:12:27.305987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2182632623.mount: Deactivated successfully.
Sep 12 10:12:27.316041 containerd[1910]: time="2025-09-12T10:12:27.315996358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:27.317375 containerd[1910]: time="2025-09-12T10:12:27.317311606Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 10:12:27.319987 containerd[1910]: time="2025-09-12T10:12:27.318780692Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:27.322248 containerd[1910]: time="2025-09-12T10:12:27.321212751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:27.322248 containerd[1910]: time="2025-09-12T10:12:27.322058206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 455.300175ms"
Sep 12 10:12:27.322248 containerd[1910]: time="2025-09-12T10:12:27.322090674Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 10:12:27.322911 containerd[1910]: time="2025-09-12T10:12:27.322875928Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 12 10:12:28.022141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount349376590.mount: Deactivated successfully.
Sep 12 10:12:30.179022 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 12 10:12:30.251433 containerd[1910]: time="2025-09-12T10:12:30.251369734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:30.254268 containerd[1910]: time="2025-09-12T10:12:30.254044115Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Sep 12 10:12:30.255705 containerd[1910]: time="2025-09-12T10:12:30.255652268Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:30.259319 containerd[1910]: time="2025-09-12T10:12:30.259251343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:12:30.260755 containerd[1910]: time="2025-09-12T10:12:30.260560101Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.937648696s"
Sep 12 10:12:30.260755 containerd[1910]: time="2025-09-12T10:12:30.260606713Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 12 10:12:34.333102 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:12:34.333368 systemd[1]: kubelet.service: Consumed 192ms CPU time, 107M memory peak.
Sep 12 10:12:34.343846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:12:34.381615 systemd[1]: Reload requested from client PID 2706 ('systemctl') (unit session-9.scope)...
Sep 12 10:12:34.381633 systemd[1]: Reloading...
Sep 12 10:12:34.508481 zram_generator::config[2754]: No configuration found.
Sep 12 10:12:34.665600 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 10:12:34.807601 systemd[1]: Reloading finished in 425 ms.
Sep 12 10:12:34.862595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:12:34.869424 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:12:34.871433 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 10:12:34.871714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:12:34.871773 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98M memory peak.
Sep 12 10:12:34.877836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:12:35.322654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:12:35.323750 (kubelet)[2816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 10:12:35.364996 kubelet[2816]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 10:12:35.364996 kubelet[2816]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 10:12:35.364996 kubelet[2816]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 10:12:35.370999 kubelet[2816]: I0912 10:12:35.370802 2816 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 10:12:36.172487 kubelet[2816]: I0912 10:12:36.171035 2816 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 10:12:36.172487 kubelet[2816]: I0912 10:12:36.171073 2816 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 10:12:36.172487 kubelet[2816]: I0912 10:12:36.171597 2816 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 10:12:36.231796 kubelet[2816]: I0912 10:12:36.231762 2816 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 10:12:36.237735 kubelet[2816]: E0912 10:12:36.237681 2816 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.240:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.240:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 12 10:12:36.266032 kubelet[2816]: E0912 10:12:36.265991 2816 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 10:12:36.266032 kubelet[2816]: I0912 10:12:36.266028 2816 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 10:12:36.275210 kubelet[2816]: I0912 10:12:36.275154 2816 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 10:12:36.284066 kubelet[2816]: I0912 10:12:36.283990 2816 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 10:12:36.289280 kubelet[2816]: I0912 10:12:36.284057 2816 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-240","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 10:12:36.291908 kubelet[2816]: I0912 10:12:36.291873 2816 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 10:12:36.291908 kubelet[2816]: I0912 10:12:36.291909 2816 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 10:12:36.293399 kubelet[2816]: I0912 10:12:36.293364 2816 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 10:12:36.297535 kubelet[2816]: I0912 10:12:36.297419 2816 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 10:12:36.297535 kubelet[2816]: I0912 10:12:36.297450 2816 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 10:12:36.298560 kubelet[2816]: I0912 10:12:36.298493 2816 kubelet.go:386] "Adding apiserver pod source"
Sep 12 10:12:36.301714 kubelet[2816]: I0912 10:12:36.301683 2816 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 10:12:36.313785 kubelet[2816]: E0912 10:12:36.313661 2816 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.240:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-240&limit=500&resourceVersion=0\": dial tcp 172.31.20.240:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 12 10:12:36.320344 kubelet[2816]: I0912 10:12:36.319942 2816 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 12 10:12:36.320630 kubelet[2816]: I0912 10:12:36.320605 2816 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 10:12:36.322384 kubelet[2816]: E0912 10:12:36.322205 2816 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.240:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.240:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 12 10:12:36.322496 kubelet[2816]: W0912 10:12:36.322411 2816 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 10:12:36.327936 kubelet[2816]: I0912 10:12:36.327888 2816 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 10:12:36.328560 kubelet[2816]: I0912 10:12:36.328122 2816 server.go:1289] "Started kubelet"
Sep 12 10:12:36.332099 kubelet[2816]: I0912 10:12:36.331374 2816 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 10:12:36.341815 kubelet[2816]: E0912 10:12:36.337233 2816 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.240:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.240:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-240.18648154ec979a5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-240,UID:ip-172-31-20-240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-240,},FirstTimestamp:2025-09-12 10:12:36.327930463 +0000 UTC m=+1.000432118,LastTimestamp:2025-09-12 10:12:36.327930463 +0000 UTC m=+1.000432118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-240,}"
Sep 12 10:12:36.343640 kubelet[2816]: I0912 10:12:36.343496 2816 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 10:12:36.345629 kubelet[2816]: I0912 10:12:36.345394 2816 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 10:12:36.345842 kubelet[2816]: I0912 10:12:36.345802 2816 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 10:12:36.351492 kubelet[2816]: I0912 10:12:36.349646 2816 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 10:12:36.351492 kubelet[2816]: I0912 10:12:36.349685 2816 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 10:12:36.351492 kubelet[2816]: E0912 10:12:36.349945 2816 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-240\" not found"
Sep 12 10:12:36.353961 kubelet[2816]: I0912 10:12:36.353817 2816 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 10:12:36.353961 kubelet[2816]: I0912 10:12:36.353817 2816 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 10:12:36.361018 kubelet[2816]: I0912 10:12:36.360201 2816 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 10:12:36.361018 kubelet[2816]: I0912 10:12:36.360276 2816 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 10:12:36.361999 kubelet[2816]: E0912 10:12:36.361972 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-240?timeout=10s\": dial tcp 172.31.20.240:6443: connect: connection refused" interval="200ms"
Sep 12 10:12:36.363623 kubelet[2816]: I0912 10:12:36.363606 2816 factory.go:223] Registration of the systemd container factory successfully
Sep 12 10:12:36.363799 kubelet[2816]: I0912 10:12:36.363783 2816 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 10:12:36.366256 kubelet[2816]: E0912 10:12:36.366206 2816 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.240:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 12 10:12:36.367536 kubelet[2816]: E0912 10:12:36.367361 2816 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 10:12:36.368040 kubelet[2816]: I0912 10:12:36.368023 2816 factory.go:223] Registration of the containerd container factory successfully
Sep 12 10:12:36.378651 kubelet[2816]: I0912 10:12:36.378624 2816 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 10:12:36.379130 kubelet[2816]: I0912 10:12:36.378775 2816 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 10:12:36.379130 kubelet[2816]: I0912 10:12:36.378806 2816 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 10:12:36.379130 kubelet[2816]: I0912 10:12:36.378813 2816 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 10:12:36.379130 kubelet[2816]: E0912 10:12:36.378913 2816 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 10:12:36.383188 kubelet[2816]: E0912 10:12:36.383163 2816 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.240:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 12 10:12:36.402820 kubelet[2816]: I0912 10:12:36.402749 2816 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 10:12:36.403129 kubelet[2816]: I0912 10:12:36.403068 2816 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 10:12:36.403129 kubelet[2816]: I0912 10:12:36.403090 2816 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 10:12:36.407494 kubelet[2816]: I0912 10:12:36.407295 2816 policy_none.go:49] "None policy: Start"
Sep 12 10:12:36.407494 kubelet[2816]: I0912 10:12:36.407325 2816 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 10:12:36.407494 kubelet[2816]: I0912 10:12:36.407336 2816 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 10:12:36.419545 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 10:12:36.432113 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 10:12:36.437251 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 10:12:36.441247 kubelet[2816]: E0912 10:12:36.441225 2816 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 10:12:36.441901 kubelet[2816]: I0912 10:12:36.441727 2816 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 10:12:36.441901 kubelet[2816]: I0912 10:12:36.441753 2816 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 10:12:36.442135 kubelet[2816]: I0912 10:12:36.442046 2816 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 10:12:36.443331 kubelet[2816]: E0912 10:12:36.443177 2816 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 10:12:36.443331 kubelet[2816]: E0912 10:12:36.443224 2816 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-240\" not found"
Sep 12 10:12:36.494054 systemd[1]: Created slice kubepods-burstable-podae6c99f83eabe3e9cbdedf0c242e6cf9.slice - libcontainer container kubepods-burstable-podae6c99f83eabe3e9cbdedf0c242e6cf9.slice.
Sep 12 10:12:36.503785 kubelet[2816]: E0912 10:12:36.503750 2816 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-240\" not found" node="ip-172-31-20-240" Sep 12 10:12:36.507496 systemd[1]: Created slice kubepods-burstable-podf93b2699fa2350ed0747db26657c68d8.slice - libcontainer container kubepods-burstable-podf93b2699fa2350ed0747db26657c68d8.slice. Sep 12 10:12:36.516121 kubelet[2816]: E0912 10:12:36.515933 2816 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-240\" not found" node="ip-172-31-20-240" Sep 12 10:12:36.519100 systemd[1]: Created slice kubepods-burstable-pod48e9657f5af54b5ab68f67891b544b8c.slice - libcontainer container kubepods-burstable-pod48e9657f5af54b5ab68f67891b544b8c.slice. Sep 12 10:12:36.520849 kubelet[2816]: E0912 10:12:36.520821 2816 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-240\" not found" node="ip-172-31-20-240" Sep 12 10:12:36.543472 kubelet[2816]: I0912 10:12:36.543434 2816 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-240" Sep 12 10:12:36.543785 kubelet[2816]: E0912 10:12:36.543759 2816 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.240:6443/api/v1/nodes\": dial tcp 172.31.20.240:6443: connect: connection refused" node="ip-172-31-20-240" Sep 12 10:12:36.564162 kubelet[2816]: E0912 10:12:36.564125 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-240?timeout=10s\": dial tcp 172.31.20.240:6443: connect: connection refused" interval="400ms" Sep 12 10:12:36.661829 kubelet[2816]: I0912 10:12:36.661764 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f93b2699fa2350ed0747db26657c68d8-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-240\" (UID: \"f93b2699fa2350ed0747db26657c68d8\") " pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:36.661829 kubelet[2816]: I0912 10:12:36.661813 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f93b2699fa2350ed0747db26657c68d8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-240\" (UID: \"f93b2699fa2350ed0747db26657c68d8\") " pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:36.661829 kubelet[2816]: I0912 10:12:36.661833 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f93b2699fa2350ed0747db26657c68d8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-240\" (UID: \"f93b2699fa2350ed0747db26657c68d8\") " pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:36.662125 kubelet[2816]: I0912 10:12:36.661849 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f93b2699fa2350ed0747db26657c68d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-240\" (UID: \"f93b2699fa2350ed0747db26657c68d8\") " pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:36.662125 kubelet[2816]: I0912 10:12:36.661870 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48e9657f5af54b5ab68f67891b544b8c-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-240\" (UID: \"48e9657f5af54b5ab68f67891b544b8c\") " pod="kube-system/kube-scheduler-ip-172-31-20-240" Sep 12 10:12:36.662125 kubelet[2816]: I0912 10:12:36.661884 2816 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae6c99f83eabe3e9cbdedf0c242e6cf9-ca-certs\") pod \"kube-apiserver-ip-172-31-20-240\" (UID: \"ae6c99f83eabe3e9cbdedf0c242e6cf9\") " pod="kube-system/kube-apiserver-ip-172-31-20-240" Sep 12 10:12:36.662125 kubelet[2816]: I0912 10:12:36.661898 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae6c99f83eabe3e9cbdedf0c242e6cf9-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-240\" (UID: \"ae6c99f83eabe3e9cbdedf0c242e6cf9\") " pod="kube-system/kube-apiserver-ip-172-31-20-240" Sep 12 10:12:36.662125 kubelet[2816]: I0912 10:12:36.661914 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae6c99f83eabe3e9cbdedf0c242e6cf9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-240\" (UID: \"ae6c99f83eabe3e9cbdedf0c242e6cf9\") " pod="kube-system/kube-apiserver-ip-172-31-20-240" Sep 12 10:12:36.662255 kubelet[2816]: I0912 10:12:36.661931 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f93b2699fa2350ed0747db26657c68d8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-240\" (UID: \"f93b2699fa2350ed0747db26657c68d8\") " pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:36.746310 kubelet[2816]: I0912 10:12:36.745930 2816 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-240" Sep 12 10:12:36.746310 kubelet[2816]: E0912 10:12:36.746213 2816 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.240:6443/api/v1/nodes\": dial tcp 172.31.20.240:6443: connect: connection refused" node="ip-172-31-20-240" Sep 12 
10:12:36.806719 containerd[1910]: time="2025-09-12T10:12:36.806664346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-240,Uid:ae6c99f83eabe3e9cbdedf0c242e6cf9,Namespace:kube-system,Attempt:0,}" Sep 12 10:12:36.816812 containerd[1910]: time="2025-09-12T10:12:36.816744965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-240,Uid:f93b2699fa2350ed0747db26657c68d8,Namespace:kube-system,Attempt:0,}" Sep 12 10:12:36.822566 containerd[1910]: time="2025-09-12T10:12:36.822528573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-240,Uid:48e9657f5af54b5ab68f67891b544b8c,Namespace:kube-system,Attempt:0,}" Sep 12 10:12:36.964896 kubelet[2816]: E0912 10:12:36.964849 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-240?timeout=10s\": dial tcp 172.31.20.240:6443: connect: connection refused" interval="800ms" Sep 12 10:12:37.153882 kubelet[2816]: I0912 10:12:37.153483 2816 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-240" Sep 12 10:12:37.154473 kubelet[2816]: E0912 10:12:37.154103 2816 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.240:6443/api/v1/nodes\": dial tcp 172.31.20.240:6443: connect: connection refused" node="ip-172-31-20-240" Sep 12 10:12:37.247613 kubelet[2816]: E0912 10:12:37.247565 2816 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.240:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 10:12:37.283970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430503426.mount: Deactivated successfully. 
Sep 12 10:12:37.294445 containerd[1910]: time="2025-09-12T10:12:37.294395523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:12:37.299933 containerd[1910]: time="2025-09-12T10:12:37.299671564Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 10:12:37.301175 containerd[1910]: time="2025-09-12T10:12:37.301131108Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:12:37.306474 containerd[1910]: time="2025-09-12T10:12:37.304340942Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:12:37.308846 containerd[1910]: time="2025-09-12T10:12:37.308789509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 10:12:37.311124 containerd[1910]: time="2025-09-12T10:12:37.311075294Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:12:37.312139 containerd[1910]: time="2025-09-12T10:12:37.312026512Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 10:12:37.314384 containerd[1910]: time="2025-09-12T10:12:37.313295053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:12:37.314384 
containerd[1910]: time="2025-09-12T10:12:37.314048380Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.850202ms" Sep 12 10:12:37.315020 containerd[1910]: time="2025-09-12T10:12:37.314982277Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 498.150932ms" Sep 12 10:12:37.321870 containerd[1910]: time="2025-09-12T10:12:37.321832298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.233297ms" Sep 12 10:12:37.483159 kubelet[2816]: E0912 10:12:37.483037 2816 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.240:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 10:12:37.591897 containerd[1910]: time="2025-09-12T10:12:37.590512880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:12:37.591897 containerd[1910]: time="2025-09-12T10:12:37.591693612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:12:37.591897 containerd[1910]: time="2025-09-12T10:12:37.591711400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:12:37.591897 containerd[1910]: time="2025-09-12T10:12:37.591797550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:12:37.596477 containerd[1910]: time="2025-09-12T10:12:37.594871949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:12:37.596477 containerd[1910]: time="2025-09-12T10:12:37.594946577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:12:37.596477 containerd[1910]: time="2025-09-12T10:12:37.594964617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:12:37.596477 containerd[1910]: time="2025-09-12T10:12:37.595069389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:12:37.609777 containerd[1910]: time="2025-09-12T10:12:37.609592581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:12:37.611875 containerd[1910]: time="2025-09-12T10:12:37.609834278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:12:37.611875 containerd[1910]: time="2025-09-12T10:12:37.609895479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:12:37.611875 containerd[1910]: time="2025-09-12T10:12:37.610066026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:12:37.616828 kubelet[2816]: E0912 10:12:37.614486 2816 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.240:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-240&limit=500&resourceVersion=0\": dial tcp 172.31.20.240:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 10:12:37.634661 systemd[1]: Started cri-containerd-b46b4d5173dc325e558e29e63b4ec97c98296993c101509d31d7b0411ce09567.scope - libcontainer container b46b4d5173dc325e558e29e63b4ec97c98296993c101509d31d7b0411ce09567. Sep 12 10:12:37.646624 systemd[1]: Started cri-containerd-524ea9d8009e7d95668fa42efff141e48751cff5118a86c33837c5eeb81e3a6b.scope - libcontainer container 524ea9d8009e7d95668fa42efff141e48751cff5118a86c33837c5eeb81e3a6b. Sep 12 10:12:37.665154 systemd[1]: Started cri-containerd-4bd392f24f9d357af3637301c5b55f81a8f6b39cb34ff4d851bd7dc7f7ac9384.scope - libcontainer container 4bd392f24f9d357af3637301c5b55f81a8f6b39cb34ff4d851bd7dc7f7ac9384. 
Sep 12 10:12:37.747502 containerd[1910]: time="2025-09-12T10:12:37.746486910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-240,Uid:f93b2699fa2350ed0747db26657c68d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bd392f24f9d357af3637301c5b55f81a8f6b39cb34ff4d851bd7dc7f7ac9384\"" Sep 12 10:12:37.750389 containerd[1910]: time="2025-09-12T10:12:37.750242234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-240,Uid:ae6c99f83eabe3e9cbdedf0c242e6cf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b46b4d5173dc325e558e29e63b4ec97c98296993c101509d31d7b0411ce09567\"" Sep 12 10:12:37.762353 containerd[1910]: time="2025-09-12T10:12:37.762188884Z" level=info msg="CreateContainer within sandbox \"b46b4d5173dc325e558e29e63b4ec97c98296993c101509d31d7b0411ce09567\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 10:12:37.766137 kubelet[2816]: E0912 10:12:37.765935 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-240?timeout=10s\": dial tcp 172.31.20.240:6443: connect: connection refused" interval="1.6s" Sep 12 10:12:37.766679 containerd[1910]: time="2025-09-12T10:12:37.766644386Z" level=info msg="CreateContainer within sandbox \"4bd392f24f9d357af3637301c5b55f81a8f6b39cb34ff4d851bd7dc7f7ac9384\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 10:12:37.767001 containerd[1910]: time="2025-09-12T10:12:37.766801269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-240,Uid:48e9657f5af54b5ab68f67891b544b8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"524ea9d8009e7d95668fa42efff141e48751cff5118a86c33837c5eeb81e3a6b\"" Sep 12 10:12:37.775330 containerd[1910]: time="2025-09-12T10:12:37.775290179Z" level=info msg="CreateContainer within sandbox 
\"524ea9d8009e7d95668fa42efff141e48751cff5118a86c33837c5eeb81e3a6b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 10:12:37.782342 kubelet[2816]: E0912 10:12:37.782296 2816 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.240:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.240:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 10:12:37.808344 containerd[1910]: time="2025-09-12T10:12:37.808290226Z" level=info msg="CreateContainer within sandbox \"b46b4d5173dc325e558e29e63b4ec97c98296993c101509d31d7b0411ce09567\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6ec99f2626e1b162ebb3a4f7966d9579de9e0d9c5ab489791ff606b8318d2e2f\"" Sep 12 10:12:37.809080 containerd[1910]: time="2025-09-12T10:12:37.809033883Z" level=info msg="StartContainer for \"6ec99f2626e1b162ebb3a4f7966d9579de9e0d9c5ab489791ff606b8318d2e2f\"" Sep 12 10:12:37.822403 containerd[1910]: time="2025-09-12T10:12:37.822265743Z" level=info msg="CreateContainer within sandbox \"4bd392f24f9d357af3637301c5b55f81a8f6b39cb34ff4d851bd7dc7f7ac9384\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"900a1d5e40bcfca558d0c4224c69a0905d6c13aac9cff061552f41fa1b1c18b3\"" Sep 12 10:12:37.824195 containerd[1910]: time="2025-09-12T10:12:37.824148984Z" level=info msg="StartContainer for \"900a1d5e40bcfca558d0c4224c69a0905d6c13aac9cff061552f41fa1b1c18b3\"" Sep 12 10:12:37.826651 containerd[1910]: time="2025-09-12T10:12:37.826521561Z" level=info msg="CreateContainer within sandbox \"524ea9d8009e7d95668fa42efff141e48751cff5118a86c33837c5eeb81e3a6b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c9a6584e655d41eb7807f549df77b08f5625498365262d43cbf2671797d24a0d\"" Sep 12 10:12:37.827380 containerd[1910]: 
time="2025-09-12T10:12:37.827351154Z" level=info msg="StartContainer for \"c9a6584e655d41eb7807f549df77b08f5625498365262d43cbf2671797d24a0d\"" Sep 12 10:12:37.855282 systemd[1]: Started cri-containerd-6ec99f2626e1b162ebb3a4f7966d9579de9e0d9c5ab489791ff606b8318d2e2f.scope - libcontainer container 6ec99f2626e1b162ebb3a4f7966d9579de9e0d9c5ab489791ff606b8318d2e2f. Sep 12 10:12:37.890725 systemd[1]: Started cri-containerd-900a1d5e40bcfca558d0c4224c69a0905d6c13aac9cff061552f41fa1b1c18b3.scope - libcontainer container 900a1d5e40bcfca558d0c4224c69a0905d6c13aac9cff061552f41fa1b1c18b3. Sep 12 10:12:37.901712 systemd[1]: Started cri-containerd-c9a6584e655d41eb7807f549df77b08f5625498365262d43cbf2671797d24a0d.scope - libcontainer container c9a6584e655d41eb7807f549df77b08f5625498365262d43cbf2671797d24a0d. Sep 12 10:12:37.952793 containerd[1910]: time="2025-09-12T10:12:37.952669915Z" level=info msg="StartContainer for \"6ec99f2626e1b162ebb3a4f7966d9579de9e0d9c5ab489791ff606b8318d2e2f\" returns successfully" Sep 12 10:12:37.957269 kubelet[2816]: I0912 10:12:37.957205 2816 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-240" Sep 12 10:12:37.959170 kubelet[2816]: E0912 10:12:37.958630 2816 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.240:6443/api/v1/nodes\": dial tcp 172.31.20.240:6443: connect: connection refused" node="ip-172-31-20-240" Sep 12 10:12:37.984258 containerd[1910]: time="2025-09-12T10:12:37.984202075Z" level=info msg="StartContainer for \"900a1d5e40bcfca558d0c4224c69a0905d6c13aac9cff061552f41fa1b1c18b3\" returns successfully" Sep 12 10:12:38.003688 containerd[1910]: time="2025-09-12T10:12:38.003565308Z" level=info msg="StartContainer for \"c9a6584e655d41eb7807f549df77b08f5625498365262d43cbf2671797d24a0d\" returns successfully" Sep 12 10:12:38.310002 kubelet[2816]: E0912 10:12:38.309882 2816 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" 
err="cannot create certificate signing request: Post \"https://172.31.20.240:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.240:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 10:12:38.398264 kubelet[2816]: E0912 10:12:38.398228 2816 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-240\" not found" node="ip-172-31-20-240" Sep 12 10:12:38.398719 kubelet[2816]: E0912 10:12:38.398694 2816 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-240\" not found" node="ip-172-31-20-240" Sep 12 10:12:38.403795 kubelet[2816]: E0912 10:12:38.403763 2816 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-240\" not found" node="ip-172-31-20-240" Sep 12 10:12:39.367481 kubelet[2816]: E0912 10:12:39.367395 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-240?timeout=10s\": dial tcp 172.31.20.240:6443: connect: connection refused" interval="3.2s" Sep 12 10:12:39.404503 kubelet[2816]: E0912 10:12:39.403762 2816 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.240:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.240:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 10:12:39.404503 kubelet[2816]: E0912 10:12:39.404119 2816 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-240\" not found" node="ip-172-31-20-240" Sep 12 10:12:39.404692 kubelet[2816]: E0912 10:12:39.404549 2816 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-240\" not found" node="ip-172-31-20-240" Sep 12 10:12:39.522379 kubelet[2816]: E0912 10:12:39.522314 2816 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.240:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 10:12:39.560619 kubelet[2816]: I0912 10:12:39.560556 2816 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-240" Sep 12 10:12:39.563091 kubelet[2816]: E0912 10:12:39.563055 2816 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.240:6443/api/v1/nodes\": dial tcp 172.31.20.240:6443: connect: connection refused" node="ip-172-31-20-240" Sep 12 10:12:40.405015 kubelet[2816]: E0912 10:12:40.404984 2816 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-240\" not found" node="ip-172-31-20-240" Sep 12 10:12:41.864393 kubelet[2816]: E0912 10:12:41.864358 2816 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-20-240" not found Sep 12 10:12:42.133816 kubelet[2816]: E0912 10:12:42.133708 2816 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-240\" not found" node="ip-172-31-20-240" Sep 12 10:12:42.213037 kubelet[2816]: E0912 10:12:42.212994 2816 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-20-240" not found Sep 12 10:12:42.324726 kubelet[2816]: I0912 10:12:42.324512 2816 apiserver.go:52] "Watching apiserver" 
Sep 12 10:12:42.360957 kubelet[2816]: I0912 10:12:42.360499 2816 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 10:12:42.571847 kubelet[2816]: E0912 10:12:42.571681 2816 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-240\" not found" node="ip-172-31-20-240" Sep 12 10:12:42.643676 kubelet[2816]: E0912 10:12:42.643642 2816 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-20-240" not found Sep 12 10:12:42.765119 kubelet[2816]: I0912 10:12:42.765069 2816 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-240" Sep 12 10:12:42.779733 kubelet[2816]: I0912 10:12:42.779583 2816 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-240" Sep 12 10:12:42.853832 kubelet[2816]: I0912 10:12:42.853790 2816 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-240" Sep 12 10:12:42.875434 kubelet[2816]: I0912 10:12:42.875310 2816 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:42.882435 kubelet[2816]: I0912 10:12:42.882213 2816 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-240" Sep 12 10:12:43.820110 systemd[1]: Reload requested from client PID 3100 ('systemctl') (unit session-9.scope)... Sep 12 10:12:43.820126 systemd[1]: Reloading... Sep 12 10:12:43.926879 zram_generator::config[3145]: No configuration found. Sep 12 10:12:44.083704 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:12:44.219524 systemd[1]: Reloading finished in 398 ms. 
Sep 12 10:12:44.246615 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:12:44.262047 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 10:12:44.262347 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:12:44.262430 systemd[1]: kubelet.service: Consumed 1.331s CPU time, 130.6M memory peak. Sep 12 10:12:44.269879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:12:44.551075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:12:44.555795 (kubelet)[3205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:12:44.636502 kubelet[3205]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:12:44.636502 kubelet[3205]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 10:12:44.636502 kubelet[3205]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 10:12:44.636502 kubelet[3205]: I0912 10:12:44.636278 3205 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 10:12:44.650578 kubelet[3205]: I0912 10:12:44.650533 3205 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 10:12:44.650578 kubelet[3205]: I0912 10:12:44.650570 3205 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 10:12:44.651044 kubelet[3205]: I0912 10:12:44.651011 3205 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 10:12:44.654827 kubelet[3205]: I0912 10:12:44.654084 3205 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 10:12:44.661122 kubelet[3205]: I0912 10:12:44.661090 3205 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 10:12:44.663762 kubelet[3205]: E0912 10:12:44.663723 3205 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 10:12:44.664305 kubelet[3205]: I0912 10:12:44.664291 3205 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 10:12:44.674166 kubelet[3205]: I0912 10:12:44.674125 3205 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 10:12:44.674987 kubelet[3205]: I0912 10:12:44.674447 3205 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 10:12:44.678740 kubelet[3205]: I0912 10:12:44.674549 3205 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-240","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 10:12:44.678740 kubelet[3205]: I0912 10:12:44.678613 3205 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 
10:12:44.678740 kubelet[3205]: I0912 10:12:44.678630 3205 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 10:12:44.679636 kubelet[3205]: I0912 10:12:44.679062 3205 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:12:44.679636 kubelet[3205]: I0912 10:12:44.679304 3205 kubelet.go:480] "Attempting to sync node with API server" Sep 12 10:12:44.680473 kubelet[3205]: I0912 10:12:44.679729 3205 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 10:12:44.680473 kubelet[3205]: I0912 10:12:44.679782 3205 kubelet.go:386] "Adding apiserver pod source" Sep 12 10:12:44.680473 kubelet[3205]: I0912 10:12:44.679801 3205 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 10:12:44.682864 kubelet[3205]: I0912 10:12:44.682840 3205 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 10:12:44.683873 kubelet[3205]: I0912 10:12:44.683853 3205 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 10:12:44.698614 kubelet[3205]: I0912 10:12:44.698588 3205 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 10:12:44.698929 kubelet[3205]: I0912 10:12:44.698915 3205 server.go:1289] "Started kubelet" Sep 12 10:12:44.712425 kubelet[3205]: I0912 10:12:44.711930 3205 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 10:12:44.712425 kubelet[3205]: I0912 10:12:44.712292 3205 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 10:12:44.712425 kubelet[3205]: I0912 10:12:44.712349 3205 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 10:12:44.718698 kubelet[3205]: I0912 10:12:44.718670 3205 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 
10:12:44.720131 kubelet[3205]: I0912 10:12:44.720065 3205 server.go:317] "Adding debug handlers to kubelet server" Sep 12 10:12:44.726495 kubelet[3205]: I0912 10:12:44.726017 3205 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 10:12:44.727550 kubelet[3205]: I0912 10:12:44.726899 3205 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 10:12:44.727550 kubelet[3205]: I0912 10:12:44.727433 3205 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 10:12:44.727633 kubelet[3205]: I0912 10:12:44.727600 3205 reconciler.go:26] "Reconciler: start to sync state" Sep 12 10:12:44.732676 kubelet[3205]: I0912 10:12:44.732533 3205 factory.go:223] Registration of the systemd container factory successfully Sep 12 10:12:44.732676 kubelet[3205]: I0912 10:12:44.732657 3205 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 10:12:44.736427 kubelet[3205]: E0912 10:12:44.736389 3205 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 10:12:44.743063 kubelet[3205]: I0912 10:12:44.739756 3205 factory.go:223] Registration of the containerd container factory successfully Sep 12 10:12:44.759742 kubelet[3205]: I0912 10:12:44.758493 3205 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 10:12:44.760895 kubelet[3205]: I0912 10:12:44.760793 3205 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 12 10:12:44.767304 kubelet[3205]: I0912 10:12:44.767037 3205 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 10:12:44.768514 kubelet[3205]: I0912 10:12:44.768412 3205 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 10:12:44.772814 kubelet[3205]: I0912 10:12:44.772307 3205 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 10:12:44.772814 kubelet[3205]: E0912 10:12:44.772388 3205 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 10:12:44.842405 kubelet[3205]: I0912 10:12:44.842151 3205 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 10:12:44.842999 kubelet[3205]: I0912 10:12:44.842967 3205 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 10:12:44.843513 kubelet[3205]: I0912 10:12:44.843491 3205 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:12:44.843811 kubelet[3205]: I0912 10:12:44.843775 3205 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 10:12:44.844043 kubelet[3205]: I0912 10:12:44.844001 3205 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 10:12:44.844578 kubelet[3205]: I0912 10:12:44.844484 3205 policy_none.go:49] "None policy: Start" Sep 12 10:12:44.844675 kubelet[3205]: I0912 10:12:44.844658 3205 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 10:12:44.844790 kubelet[3205]: I0912 10:12:44.844758 3205 state_mem.go:35] "Initializing new in-memory state store" Sep 12 10:12:44.845224 kubelet[3205]: I0912 10:12:44.845055 3205 state_mem.go:75] "Updated machine memory state" Sep 12 10:12:44.861469 kubelet[3205]: E0912 10:12:44.859895 3205 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 10:12:44.861469 kubelet[3205]: I0912 
10:12:44.860145 3205 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 10:12:44.861469 kubelet[3205]: I0912 10:12:44.860164 3205 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 10:12:44.861469 kubelet[3205]: I0912 10:12:44.860831 3205 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 10:12:44.862631 sudo[3242]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 10:12:44.863148 sudo[3242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 10:12:44.870357 kubelet[3205]: E0912 10:12:44.868236 3205 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 10:12:44.874569 kubelet[3205]: I0912 10:12:44.873283 3205 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:44.874569 kubelet[3205]: I0912 10:12:44.873950 3205 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-240" Sep 12 10:12:44.874569 kubelet[3205]: I0912 10:12:44.874282 3205 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-240" Sep 12 10:12:44.900145 kubelet[3205]: E0912 10:12:44.899359 3205 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-240\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:44.906485 kubelet[3205]: E0912 10:12:44.906314 3205 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-240\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-240" Sep 12 10:12:44.906485 kubelet[3205]: E0912 10:12:44.906410 3205 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-240\" 
already exists" pod="kube-system/kube-scheduler-ip-172-31-20-240" Sep 12 10:12:44.928916 kubelet[3205]: I0912 10:12:44.928881 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48e9657f5af54b5ab68f67891b544b8c-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-240\" (UID: \"48e9657f5af54b5ab68f67891b544b8c\") " pod="kube-system/kube-scheduler-ip-172-31-20-240" Sep 12 10:12:44.928916 kubelet[3205]: I0912 10:12:44.928920 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae6c99f83eabe3e9cbdedf0c242e6cf9-ca-certs\") pod \"kube-apiserver-ip-172-31-20-240\" (UID: \"ae6c99f83eabe3e9cbdedf0c242e6cf9\") " pod="kube-system/kube-apiserver-ip-172-31-20-240" Sep 12 10:12:44.929076 kubelet[3205]: I0912 10:12:44.928937 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f93b2699fa2350ed0747db26657c68d8-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-240\" (UID: \"f93b2699fa2350ed0747db26657c68d8\") " pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:44.929076 kubelet[3205]: I0912 10:12:44.928956 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f93b2699fa2350ed0747db26657c68d8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-240\" (UID: \"f93b2699fa2350ed0747db26657c68d8\") " pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:44.929076 kubelet[3205]: I0912 10:12:44.928985 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f93b2699fa2350ed0747db26657c68d8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-240\" 
(UID: \"f93b2699fa2350ed0747db26657c68d8\") " pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:44.929076 kubelet[3205]: I0912 10:12:44.929003 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f93b2699fa2350ed0747db26657c68d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-240\" (UID: \"f93b2699fa2350ed0747db26657c68d8\") " pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:44.929076 kubelet[3205]: I0912 10:12:44.929033 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae6c99f83eabe3e9cbdedf0c242e6cf9-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-240\" (UID: \"ae6c99f83eabe3e9cbdedf0c242e6cf9\") " pod="kube-system/kube-apiserver-ip-172-31-20-240" Sep 12 10:12:44.929211 kubelet[3205]: I0912 10:12:44.929050 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae6c99f83eabe3e9cbdedf0c242e6cf9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-240\" (UID: \"ae6c99f83eabe3e9cbdedf0c242e6cf9\") " pod="kube-system/kube-apiserver-ip-172-31-20-240" Sep 12 10:12:44.929211 kubelet[3205]: I0912 10:12:44.929070 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f93b2699fa2350ed0747db26657c68d8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-240\" (UID: \"f93b2699fa2350ed0747db26657c68d8\") " pod="kube-system/kube-controller-manager-ip-172-31-20-240" Sep 12 10:12:44.980074 kubelet[3205]: I0912 10:12:44.979547 3205 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-240" Sep 12 10:12:44.997088 kubelet[3205]: I0912 10:12:44.997039 3205 
kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-240" Sep 12 10:12:44.997275 kubelet[3205]: I0912 10:12:44.997244 3205 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-240" Sep 12 10:12:45.039142 update_engine[1902]: I20250912 10:12:45.039067 1902 update_attempter.cc:509] Updating boot flags... Sep 12 10:12:45.136489 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3260) Sep 12 10:12:45.530674 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3261) Sep 12 10:12:45.695183 kubelet[3205]: I0912 10:12:45.695064 3205 apiserver.go:52] "Watching apiserver" Sep 12 10:12:45.728591 kubelet[3205]: I0912 10:12:45.728381 3205 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 10:12:45.808497 kubelet[3205]: I0912 10:12:45.805055 3205 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-240" Sep 12 10:12:45.828222 kubelet[3205]: E0912 10:12:45.828174 3205 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-240\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-240" Sep 12 10:12:45.887526 kubelet[3205]: I0912 10:12:45.887439 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-240" podStartSLOduration=3.887417682 podStartE2EDuration="3.887417682s" podCreationTimestamp="2025-09-12 10:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:12:45.884775921 +0000 UTC m=+1.321442938" watchObservedRunningTime="2025-09-12 10:12:45.887417682 +0000 UTC m=+1.324084700" Sep 12 10:12:45.888118 kubelet[3205]: I0912 10:12:45.888055 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ip-172-31-20-240" podStartSLOduration=3.8880412030000002 podStartE2EDuration="3.888041203s" podCreationTimestamp="2025-09-12 10:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:12:45.86315988 +0000 UTC m=+1.299826898" watchObservedRunningTime="2025-09-12 10:12:45.888041203 +0000 UTC m=+1.324708220" Sep 12 10:12:45.922247 kubelet[3205]: I0912 10:12:45.921993 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-240" podStartSLOduration=3.921970216 podStartE2EDuration="3.921970216s" podCreationTimestamp="2025-09-12 10:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:12:45.904398442 +0000 UTC m=+1.341065460" watchObservedRunningTime="2025-09-12 10:12:45.921970216 +0000 UTC m=+1.358637234" Sep 12 10:12:45.992770 sudo[3242]: pam_unix(sudo:session): session closed for user root Sep 12 10:12:47.696092 sudo[2255]: pam_unix(sudo:session): session closed for user root Sep 12 10:12:47.718424 sshd[2254]: Connection closed by 147.75.109.163 port 46508 Sep 12 10:12:47.719750 sshd-session[2252]: pam_unix(sshd:session): session closed for user core Sep 12 10:12:47.722999 systemd[1]: sshd@8-172.31.20.240:22-147.75.109.163:46508.service: Deactivated successfully. Sep 12 10:12:47.725078 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 10:12:47.725279 systemd[1]: session-9.scope: Consumed 5.943s CPU time, 208.9M memory peak. Sep 12 10:12:47.727980 systemd-logind[1901]: Session 9 logged out. Waiting for processes to exit. Sep 12 10:12:47.729269 systemd-logind[1901]: Removed session 9. 
Sep 12 10:12:49.388115 kubelet[3205]: I0912 10:12:49.387847 3205 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 10:12:49.388569 containerd[1910]: time="2025-09-12T10:12:49.388538099Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 10:12:49.388890 kubelet[3205]: I0912 10:12:49.388873 3205 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 10:12:50.343512 systemd[1]: Created slice kubepods-besteffort-pode5c92934_4315_47d9_a3e7_ebbb9ec6161c.slice - libcontainer container kubepods-besteffort-pode5c92934_4315_47d9_a3e7_ebbb9ec6161c.slice. Sep 12 10:12:50.359669 systemd[1]: Created slice kubepods-burstable-pod06063492_233b_494d_acd5_152ffdacab1c.slice - libcontainer container kubepods-burstable-pod06063492_233b_494d_acd5_152ffdacab1c.slice. Sep 12 10:12:50.365124 kubelet[3205]: I0912 10:12:50.364711 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-hostproc\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365124 kubelet[3205]: I0912 10:12:50.364747 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-etc-cni-netd\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365124 kubelet[3205]: I0912 10:12:50.364768 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06063492-233b-494d-acd5-152ffdacab1c-clustermesh-secrets\") pod \"cilium-2vkjt\" (UID: 
\"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365124 kubelet[3205]: I0912 10:12:50.364784 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06063492-233b-494d-acd5-152ffdacab1c-cilium-config-path\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365124 kubelet[3205]: I0912 10:12:50.364800 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06063492-233b-494d-acd5-152ffdacab1c-hubble-tls\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365124 kubelet[3205]: I0912 10:12:50.364817 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-cni-path\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365398 kubelet[3205]: I0912 10:12:50.364831 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-host-proc-sys-net\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365398 kubelet[3205]: I0912 10:12:50.364846 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x2q5\" (UniqueName: \"kubernetes.io/projected/06063492-233b-494d-acd5-152ffdacab1c-kube-api-access-8x2q5\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365398 kubelet[3205]: 
I0912 10:12:50.364861 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e5c92934-4315-47d9-a3e7-ebbb9ec6161c-kube-proxy\") pod \"kube-proxy-twxrc\" (UID: \"e5c92934-4315-47d9-a3e7-ebbb9ec6161c\") " pod="kube-system/kube-proxy-twxrc" Sep 12 10:12:50.365398 kubelet[3205]: I0912 10:12:50.364875 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb2fn\" (UniqueName: \"kubernetes.io/projected/e5c92934-4315-47d9-a3e7-ebbb9ec6161c-kube-api-access-mb2fn\") pod \"kube-proxy-twxrc\" (UID: \"e5c92934-4315-47d9-a3e7-ebbb9ec6161c\") " pod="kube-system/kube-proxy-twxrc" Sep 12 10:12:50.365398 kubelet[3205]: I0912 10:12:50.364890 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-lib-modules\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365555 kubelet[3205]: I0912 10:12:50.364904 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-host-proc-sys-kernel\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365555 kubelet[3205]: I0912 10:12:50.364920 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5c92934-4315-47d9-a3e7-ebbb9ec6161c-lib-modules\") pod \"kube-proxy-twxrc\" (UID: \"e5c92934-4315-47d9-a3e7-ebbb9ec6161c\") " pod="kube-system/kube-proxy-twxrc" Sep 12 10:12:50.365555 kubelet[3205]: I0912 10:12:50.364935 3205 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-cilium-run\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365555 kubelet[3205]: I0912 10:12:50.364950 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-bpf-maps\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365555 kubelet[3205]: I0912 10:12:50.364968 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-cilium-cgroup\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365555 kubelet[3205]: I0912 10:12:50.364982 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-xtables-lock\") pod \"cilium-2vkjt\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") " pod="kube-system/cilium-2vkjt" Sep 12 10:12:50.365699 kubelet[3205]: I0912 10:12:50.365001 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5c92934-4315-47d9-a3e7-ebbb9ec6161c-xtables-lock\") pod \"kube-proxy-twxrc\" (UID: \"e5c92934-4315-47d9-a3e7-ebbb9ec6161c\") " pod="kube-system/kube-proxy-twxrc" Sep 12 10:12:50.643257 systemd[1]: Created slice kubepods-besteffort-pod36a3c811_486b_4e8a_81dc_aa84690fd247.slice - libcontainer container kubepods-besteffort-pod36a3c811_486b_4e8a_81dc_aa84690fd247.slice. 
Sep 12 10:12:50.657906 containerd[1910]: time="2025-09-12T10:12:50.657867179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-twxrc,Uid:e5c92934-4315-47d9-a3e7-ebbb9ec6161c,Namespace:kube-system,Attempt:0,}" Sep 12 10:12:50.663786 containerd[1910]: time="2025-09-12T10:12:50.663739549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vkjt,Uid:06063492-233b-494d-acd5-152ffdacab1c,Namespace:kube-system,Attempt:0,}" Sep 12 10:12:50.668132 kubelet[3205]: I0912 10:12:50.668016 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks6kw\" (UniqueName: \"kubernetes.io/projected/36a3c811-486b-4e8a-81dc-aa84690fd247-kube-api-access-ks6kw\") pod \"cilium-operator-6c4d7847fc-c6xgs\" (UID: \"36a3c811-486b-4e8a-81dc-aa84690fd247\") " pod="kube-system/cilium-operator-6c4d7847fc-c6xgs" Sep 12 10:12:50.668132 kubelet[3205]: I0912 10:12:50.668077 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36a3c811-486b-4e8a-81dc-aa84690fd247-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-c6xgs\" (UID: \"36a3c811-486b-4e8a-81dc-aa84690fd247\") " pod="kube-system/cilium-operator-6c4d7847fc-c6xgs" Sep 12 10:12:50.735619 containerd[1910]: time="2025-09-12T10:12:50.734664889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:12:50.735619 containerd[1910]: time="2025-09-12T10:12:50.734749339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:12:50.735619 containerd[1910]: time="2025-09-12T10:12:50.734770515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:12:50.735619 containerd[1910]: time="2025-09-12T10:12:50.734881112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:12:50.788182 systemd[1]: Started cri-containerd-d3d40de97bb808c90fdbb9a6dd5eb0489a6bb600300f4dbcba0cc0e500639cc2.scope - libcontainer container d3d40de97bb808c90fdbb9a6dd5eb0489a6bb600300f4dbcba0cc0e500639cc2. Sep 12 10:12:50.827893 containerd[1910]: time="2025-09-12T10:12:50.826994065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:12:50.827893 containerd[1910]: time="2025-09-12T10:12:50.827639484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:12:50.827893 containerd[1910]: time="2025-09-12T10:12:50.827662802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:12:50.829810 containerd[1910]: time="2025-09-12T10:12:50.829662752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:12:50.862982 systemd[1]: Started cri-containerd-551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9.scope - libcontainer container 551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9. 
Sep 12 10:12:50.904160 containerd[1910]: time="2025-09-12T10:12:50.903867740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-twxrc,Uid:e5c92934-4315-47d9-a3e7-ebbb9ec6161c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3d40de97bb808c90fdbb9a6dd5eb0489a6bb600300f4dbcba0cc0e500639cc2\"" Sep 12 10:12:50.916683 containerd[1910]: time="2025-09-12T10:12:50.916635047Z" level=info msg="CreateContainer within sandbox \"d3d40de97bb808c90fdbb9a6dd5eb0489a6bb600300f4dbcba0cc0e500639cc2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 10:12:50.923026 containerd[1910]: time="2025-09-12T10:12:50.922989774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vkjt,Uid:06063492-233b-494d-acd5-152ffdacab1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\"" Sep 12 10:12:50.929633 containerd[1910]: time="2025-09-12T10:12:50.929399876Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 10:12:50.949153 containerd[1910]: time="2025-09-12T10:12:50.949102771Z" level=info msg="CreateContainer within sandbox \"d3d40de97bb808c90fdbb9a6dd5eb0489a6bb600300f4dbcba0cc0e500639cc2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0029739fb50975052d1226451c454bacdddffa40afe6f804e6965f6ba85a1faa\"" Sep 12 10:12:50.949977 containerd[1910]: time="2025-09-12T10:12:50.949753680Z" level=info msg="StartContainer for \"0029739fb50975052d1226451c454bacdddffa40afe6f804e6965f6ba85a1faa\"" Sep 12 10:12:50.955276 containerd[1910]: time="2025-09-12T10:12:50.954868138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-c6xgs,Uid:36a3c811-486b-4e8a-81dc-aa84690fd247,Namespace:kube-system,Attempt:0,}" Sep 12 10:12:50.987717 systemd[1]: Started cri-containerd-0029739fb50975052d1226451c454bacdddffa40afe6f804e6965f6ba85a1faa.scope 
- libcontainer container 0029739fb50975052d1226451c454bacdddffa40afe6f804e6965f6ba85a1faa. Sep 12 10:12:51.009085 containerd[1910]: time="2025-09-12T10:12:51.008882735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:12:51.009432 containerd[1910]: time="2025-09-12T10:12:51.009367907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:12:51.009657 containerd[1910]: time="2025-09-12T10:12:51.009602295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:12:51.010080 containerd[1910]: time="2025-09-12T10:12:51.010015728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:12:51.035744 systemd[1]: Started cri-containerd-e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7.scope - libcontainer container e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7. 
Sep 12 10:12:51.057024 containerd[1910]: time="2025-09-12T10:12:51.056704797Z" level=info msg="StartContainer for \"0029739fb50975052d1226451c454bacdddffa40afe6f804e6965f6ba85a1faa\" returns successfully" Sep 12 10:12:51.093396 containerd[1910]: time="2025-09-12T10:12:51.093355393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-c6xgs,Uid:36a3c811-486b-4e8a-81dc-aa84690fd247,Namespace:kube-system,Attempt:0,} returns sandbox id \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\"" Sep 12 10:12:54.189320 kubelet[3205]: I0912 10:12:54.187106 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-twxrc" podStartSLOduration=4.185249625 podStartE2EDuration="4.185249625s" podCreationTimestamp="2025-09-12 10:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:12:51.843059572 +0000 UTC m=+7.279726609" watchObservedRunningTime="2025-09-12 10:12:54.185249625 +0000 UTC m=+9.621916643" Sep 12 10:12:56.500930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount978921900.mount: Deactivated successfully. 
Sep 12 10:12:59.194717 containerd[1910]: time="2025-09-12T10:12:59.194540276Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:12:59.203121 containerd[1910]: time="2025-09-12T10:12:59.203019441Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 10:12:59.207240 containerd[1910]: time="2025-09-12T10:12:59.205910738Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:12:59.207857 containerd[1910]: time="2025-09-12T10:12:59.207815485Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.278365398s" Sep 12 10:12:59.207974 containerd[1910]: time="2025-09-12T10:12:59.207862736Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 10:12:59.209388 containerd[1910]: time="2025-09-12T10:12:59.209361290Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 10:12:59.214574 containerd[1910]: time="2025-09-12T10:12:59.214532457Z" level=info msg="CreateContainer within sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 10:12:59.295641 containerd[1910]: time="2025-09-12T10:12:59.295594617Z" level=info msg="CreateContainer within sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902\"" Sep 12 10:12:59.296316 containerd[1910]: time="2025-09-12T10:12:59.296279138Z" level=info msg="StartContainer for \"4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902\"" Sep 12 10:12:59.459777 systemd[1]: Started cri-containerd-4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902.scope - libcontainer container 4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902. Sep 12 10:12:59.495594 containerd[1910]: time="2025-09-12T10:12:59.495553909Z" level=info msg="StartContainer for \"4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902\" returns successfully" Sep 12 10:12:59.510541 systemd[1]: cri-containerd-4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902.scope: Deactivated successfully. 
Sep 12 10:12:59.724817 containerd[1910]: time="2025-09-12T10:12:59.711247893Z" level=info msg="shim disconnected" id=4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902 namespace=k8s.io Sep 12 10:12:59.724817 containerd[1910]: time="2025-09-12T10:12:59.724569946Z" level=warning msg="cleaning up after shim disconnected" id=4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902 namespace=k8s.io Sep 12 10:12:59.724817 containerd[1910]: time="2025-09-12T10:12:59.724584818Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:12:59.868971 containerd[1910]: time="2025-09-12T10:12:59.868936808Z" level=info msg="CreateContainer within sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 10:12:59.898375 containerd[1910]: time="2025-09-12T10:12:59.898326440Z" level=info msg="CreateContainer within sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67\"" Sep 12 10:12:59.900996 containerd[1910]: time="2025-09-12T10:12:59.900164667Z" level=info msg="StartContainer for \"db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67\"" Sep 12 10:12:59.946914 systemd[1]: Started cri-containerd-db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67.scope - libcontainer container db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67. Sep 12 10:12:59.984698 containerd[1910]: time="2025-09-12T10:12:59.983638772Z" level=info msg="StartContainer for \"db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67\" returns successfully" Sep 12 10:12:59.999503 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 10:13:00.006917 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 10:13:00.007599 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:13:00.033683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:13:00.034067 systemd[1]: cri-containerd-db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67.scope: Deactivated successfully. Sep 12 10:13:00.080174 containerd[1910]: time="2025-09-12T10:13:00.080076345Z" level=info msg="shim disconnected" id=db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67 namespace=k8s.io Sep 12 10:13:00.080174 containerd[1910]: time="2025-09-12T10:13:00.080173044Z" level=warning msg="cleaning up after shim disconnected" id=db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67 namespace=k8s.io Sep 12 10:13:00.080174 containerd[1910]: time="2025-09-12T10:13:00.080184747Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:13:00.083938 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:13:00.287578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902-rootfs.mount: Deactivated successfully. Sep 12 10:13:00.691135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211500070.mount: Deactivated successfully. 
Sep 12 10:13:00.874505 containerd[1910]: time="2025-09-12T10:13:00.874210935Z" level=info msg="CreateContainer within sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 10:13:00.940546 containerd[1910]: time="2025-09-12T10:13:00.940500879Z" level=info msg="CreateContainer within sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d\"" Sep 12 10:13:00.942810 containerd[1910]: time="2025-09-12T10:13:00.941299761Z" level=info msg="StartContainer for \"55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d\"" Sep 12 10:13:00.982204 systemd[1]: Started cri-containerd-55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d.scope - libcontainer container 55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d. Sep 12 10:13:01.042476 systemd[1]: cri-containerd-55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d.scope: Deactivated successfully. 
Sep 12 10:13:01.063915 containerd[1910]: time="2025-09-12T10:13:01.063872083Z" level=info msg="StartContainer for \"55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d\" returns successfully" Sep 12 10:13:01.118124 containerd[1910]: time="2025-09-12T10:13:01.118057613Z" level=info msg="shim disconnected" id=55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d namespace=k8s.io Sep 12 10:13:01.118124 containerd[1910]: time="2025-09-12T10:13:01.118121482Z" level=warning msg="cleaning up after shim disconnected" id=55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d namespace=k8s.io Sep 12 10:13:01.118124 containerd[1910]: time="2025-09-12T10:13:01.118132419Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:13:01.289744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d-rootfs.mount: Deactivated successfully. Sep 12 10:13:01.905950 containerd[1910]: time="2025-09-12T10:13:01.901341030Z" level=info msg="CreateContainer within sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 10:13:01.912901 containerd[1910]: time="2025-09-12T10:13:01.912178181Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:13:01.914059 containerd[1910]: time="2025-09-12T10:13:01.913908259Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 10:13:01.916200 containerd[1910]: time="2025-09-12T10:13:01.915792566Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 12 10:13:01.924743 containerd[1910]: time="2025-09-12T10:13:01.924546031Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.715020731s" Sep 12 10:13:01.924743 containerd[1910]: time="2025-09-12T10:13:01.924610817Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 10:13:01.941120 containerd[1910]: time="2025-09-12T10:13:01.940635379Z" level=info msg="CreateContainer within sandbox \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 10:13:01.990445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700768891.mount: Deactivated successfully. 
Sep 12 10:13:02.008411 containerd[1910]: time="2025-09-12T10:13:02.007866289Z" level=info msg="CreateContainer within sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587\"" Sep 12 10:13:02.032520 containerd[1910]: time="2025-09-12T10:13:02.028647387Z" level=info msg="StartContainer for \"f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587\"" Sep 12 10:13:02.032520 containerd[1910]: time="2025-09-12T10:13:02.032397478Z" level=info msg="CreateContainer within sandbox \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\"" Sep 12 10:13:02.033747 containerd[1910]: time="2025-09-12T10:13:02.033717159Z" level=info msg="StartContainer for \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\"" Sep 12 10:13:02.143769 systemd[1]: Started cri-containerd-0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa.scope - libcontainer container 0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa. Sep 12 10:13:02.155646 systemd[1]: Started cri-containerd-f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587.scope - libcontainer container f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587. Sep 12 10:13:02.305872 containerd[1910]: time="2025-09-12T10:13:02.305444231Z" level=info msg="StartContainer for \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\" returns successfully" Sep 12 10:13:02.310962 systemd[1]: cri-containerd-f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587.scope: Deactivated successfully. 
Sep 12 10:13:02.313614 containerd[1910]: time="2025-09-12T10:13:02.313100311Z" level=info msg="StartContainer for \"f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587\" returns successfully" Sep 12 10:13:02.369944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587-rootfs.mount: Deactivated successfully. Sep 12 10:13:02.465331 containerd[1910]: time="2025-09-12T10:13:02.464795254Z" level=info msg="shim disconnected" id=f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587 namespace=k8s.io Sep 12 10:13:02.465331 containerd[1910]: time="2025-09-12T10:13:02.464869070Z" level=warning msg="cleaning up after shim disconnected" id=f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587 namespace=k8s.io Sep 12 10:13:02.465331 containerd[1910]: time="2025-09-12T10:13:02.464882190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:13:02.896163 containerd[1910]: time="2025-09-12T10:13:02.895764602Z" level=info msg="CreateContainer within sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 10:13:02.924634 containerd[1910]: time="2025-09-12T10:13:02.924589189Z" level=info msg="CreateContainer within sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\"" Sep 12 10:13:02.928035 containerd[1910]: time="2025-09-12T10:13:02.926706321Z" level=info msg="StartContainer for \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\"" Sep 12 10:13:02.980555 systemd[1]: Started cri-containerd-6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673.scope - libcontainer container 6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673. 
Sep 12 10:13:03.089355 kubelet[3205]: I0912 10:13:03.089284 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-c6xgs" podStartSLOduration=2.256215275 podStartE2EDuration="13.089262651s" podCreationTimestamp="2025-09-12 10:12:50 +0000 UTC" firstStartedPulling="2025-09-12 10:12:51.094885826 +0000 UTC m=+6.531552824" lastFinishedPulling="2025-09-12 10:13:01.927933194 +0000 UTC m=+17.364600200" observedRunningTime="2025-09-12 10:13:02.980175208 +0000 UTC m=+18.416842225" watchObservedRunningTime="2025-09-12 10:13:03.089262651 +0000 UTC m=+18.525929671" Sep 12 10:13:03.127618 containerd[1910]: time="2025-09-12T10:13:03.127566100Z" level=info msg="StartContainer for \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\" returns successfully" Sep 12 10:13:03.591329 kubelet[3205]: I0912 10:13:03.591301 3205 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 10:13:03.810290 systemd[1]: Created slice kubepods-burstable-pod3487540f_2f5f_465f_80c9_894ed62fdbb0.slice - libcontainer container kubepods-burstable-pod3487540f_2f5f_465f_80c9_894ed62fdbb0.slice. 
Sep 12 10:13:03.829518 kubelet[3205]: I0912 10:13:03.826938 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3487540f-2f5f-465f-80c9-894ed62fdbb0-config-volume\") pod \"coredns-674b8bbfcf-wbb7f\" (UID: \"3487540f-2f5f-465f-80c9-894ed62fdbb0\") " pod="kube-system/coredns-674b8bbfcf-wbb7f" Sep 12 10:13:03.829518 kubelet[3205]: I0912 10:13:03.827029 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blr97\" (UniqueName: \"kubernetes.io/projected/3487540f-2f5f-465f-80c9-894ed62fdbb0-kube-api-access-blr97\") pod \"coredns-674b8bbfcf-wbb7f\" (UID: \"3487540f-2f5f-465f-80c9-894ed62fdbb0\") " pod="kube-system/coredns-674b8bbfcf-wbb7f" Sep 12 10:13:03.833169 systemd[1]: Created slice kubepods-burstable-podd44f4f2e_08e9_4df8_bed8_13886e659bab.slice - libcontainer container kubepods-burstable-podd44f4f2e_08e9_4df8_bed8_13886e659bab.slice. 
Sep 12 10:13:03.927927 kubelet[3205]: I0912 10:13:03.927804 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d44f4f2e-08e9-4df8-bed8-13886e659bab-config-volume\") pod \"coredns-674b8bbfcf-sbrtx\" (UID: \"d44f4f2e-08e9-4df8-bed8-13886e659bab\") " pod="kube-system/coredns-674b8bbfcf-sbrtx" Sep 12 10:13:03.929568 kubelet[3205]: I0912 10:13:03.929529 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6p7b\" (UniqueName: \"kubernetes.io/projected/d44f4f2e-08e9-4df8-bed8-13886e659bab-kube-api-access-t6p7b\") pod \"coredns-674b8bbfcf-sbrtx\" (UID: \"d44f4f2e-08e9-4df8-bed8-13886e659bab\") " pod="kube-system/coredns-674b8bbfcf-sbrtx" Sep 12 10:13:04.135095 containerd[1910]: time="2025-09-12T10:13:04.134658854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wbb7f,Uid:3487540f-2f5f-465f-80c9-894ed62fdbb0,Namespace:kube-system,Attempt:0,}" Sep 12 10:13:04.140230 containerd[1910]: time="2025-09-12T10:13:04.139777446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sbrtx,Uid:d44f4f2e-08e9-4df8-bed8-13886e659bab,Namespace:kube-system,Attempt:0,}" Sep 12 10:13:06.172717 systemd-networkd[1833]: cilium_host: Link UP Sep 12 10:13:06.172896 systemd-networkd[1833]: cilium_net: Link UP Sep 12 10:13:06.174350 systemd-networkd[1833]: cilium_net: Gained carrier Sep 12 10:13:06.174655 systemd-networkd[1833]: cilium_host: Gained carrier Sep 12 10:13:06.180505 (udev-worker)[4232]: Network interface NamePolicy= disabled on kernel command line. Sep 12 10:13:06.180506 (udev-worker)[4193]: Network interface NamePolicy= disabled on kernel command line. Sep 12 10:13:06.333606 (udev-worker)[4238]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 10:13:06.339813 systemd-networkd[1833]: cilium_vxlan: Link UP Sep 12 10:13:06.339823 systemd-networkd[1833]: cilium_vxlan: Gained carrier Sep 12 10:13:06.992495 kernel: NET: Registered PF_ALG protocol family Sep 12 10:13:07.104621 systemd-networkd[1833]: cilium_net: Gained IPv6LL Sep 12 10:13:07.105778 systemd-networkd[1833]: cilium_host: Gained IPv6LL Sep 12 10:13:07.809337 (udev-worker)[4239]: Network interface NamePolicy= disabled on kernel command line. Sep 12 10:13:07.816218 systemd-networkd[1833]: cilium_vxlan: Gained IPv6LL Sep 12 10:13:07.828034 systemd-networkd[1833]: lxc_health: Link UP Sep 12 10:13:07.846361 systemd-networkd[1833]: lxc_health: Gained carrier Sep 12 10:13:08.288207 systemd-networkd[1833]: lxcdd9dcf27b0c7: Link UP Sep 12 10:13:08.292037 kernel: eth0: renamed from tmp08b52 Sep 12 10:13:08.309489 kernel: eth0: renamed from tmpcf10b Sep 12 10:13:08.315379 systemd-networkd[1833]: lxc6625d5c41a7b: Link UP Sep 12 10:13:08.316224 systemd-networkd[1833]: lxc6625d5c41a7b: Gained carrier Sep 12 10:13:08.318771 systemd-networkd[1833]: lxcdd9dcf27b0c7: Gained carrier Sep 12 10:13:08.698237 kubelet[3205]: I0912 10:13:08.696346 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2vkjt" podStartSLOduration=10.414531411 podStartE2EDuration="18.696323016s" podCreationTimestamp="2025-09-12 10:12:50 +0000 UTC" firstStartedPulling="2025-09-12 10:12:50.92738701 +0000 UTC m=+6.364054007" lastFinishedPulling="2025-09-12 10:12:59.209178614 +0000 UTC m=+14.645845612" observedRunningTime="2025-09-12 10:13:03.946389172 +0000 UTC m=+19.383056187" watchObservedRunningTime="2025-09-12 10:13:08.696323016 +0000 UTC m=+24.132990034" Sep 12 10:13:09.088645 systemd-networkd[1833]: lxc_health: Gained IPv6LL Sep 12 10:13:09.408715 systemd-networkd[1833]: lxcdd9dcf27b0c7: Gained IPv6LL Sep 12 10:13:10.112749 systemd-networkd[1833]: lxc6625d5c41a7b: Gained IPv6LL Sep 12 10:13:12.385878 ntpd[1886]: Listen normally on 8 
cilium_host 192.168.0.39:123 Sep 12 10:13:12.386731 ntpd[1886]: 12 Sep 10:13:12 ntpd[1886]: Listen normally on 8 cilium_host 192.168.0.39:123 Sep 12 10:13:12.386731 ntpd[1886]: 12 Sep 10:13:12 ntpd[1886]: Listen normally on 9 cilium_net [fe80::9438:ecff:fe26:cacd%4]:123 Sep 12 10:13:12.386731 ntpd[1886]: 12 Sep 10:13:12 ntpd[1886]: Listen normally on 10 cilium_host [fe80::c45b:2aff:fe8b:ef16%5]:123 Sep 12 10:13:12.386731 ntpd[1886]: 12 Sep 10:13:12 ntpd[1886]: Listen normally on 11 cilium_vxlan [fe80::f0f1:26ff:fe4f:9337%6]:123 Sep 12 10:13:12.386731 ntpd[1886]: 12 Sep 10:13:12 ntpd[1886]: Listen normally on 12 lxc_health [fe80::708d:c3ff:feeb:a53f%8]:123 Sep 12 10:13:12.386731 ntpd[1886]: 12 Sep 10:13:12 ntpd[1886]: Listen normally on 13 lxcdd9dcf27b0c7 [fe80::80a8:64ff:fec2:3584%10]:123 Sep 12 10:13:12.386731 ntpd[1886]: 12 Sep 10:13:12 ntpd[1886]: Listen normally on 14 lxc6625d5c41a7b [fe80::482d:8ff:fecf:1fd%12]:123 Sep 12 10:13:12.385983 ntpd[1886]: Listen normally on 9 cilium_net [fe80::9438:ecff:fe26:cacd%4]:123 Sep 12 10:13:12.386042 ntpd[1886]: Listen normally on 10 cilium_host [fe80::c45b:2aff:fe8b:ef16%5]:123 Sep 12 10:13:12.386083 ntpd[1886]: Listen normally on 11 cilium_vxlan [fe80::f0f1:26ff:fe4f:9337%6]:123 Sep 12 10:13:12.386125 ntpd[1886]: Listen normally on 12 lxc_health [fe80::708d:c3ff:feeb:a53f%8]:123 Sep 12 10:13:12.386164 ntpd[1886]: Listen normally on 13 lxcdd9dcf27b0c7 [fe80::80a8:64ff:fec2:3584%10]:123 Sep 12 10:13:12.386203 ntpd[1886]: Listen normally on 14 lxc6625d5c41a7b [fe80::482d:8ff:fecf:1fd%12]:123 Sep 12 10:13:12.760983 containerd[1910]: time="2025-09-12T10:13:12.758389714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:13:12.760983 containerd[1910]: time="2025-09-12T10:13:12.758601575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:13:12.760983 containerd[1910]: time="2025-09-12T10:13:12.758649274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:13:12.760983 containerd[1910]: time="2025-09-12T10:13:12.758863583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:13:12.767737 containerd[1910]: time="2025-09-12T10:13:12.767395116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:13:12.767737 containerd[1910]: time="2025-09-12T10:13:12.767516979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:13:12.767737 containerd[1910]: time="2025-09-12T10:13:12.767540760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:13:12.767737 containerd[1910]: time="2025-09-12T10:13:12.767641907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:13:12.839702 systemd[1]: Started cri-containerd-cf10b6eff4d137a53c9f4aa08833b9bf4a299b90bc147208e4b848e0776602d4.scope - libcontainer container cf10b6eff4d137a53c9f4aa08833b9bf4a299b90bc147208e4b848e0776602d4. Sep 12 10:13:12.874761 systemd[1]: Started cri-containerd-08b52236426a5d45c1a5f2b9d3376a708e24c8834cb7ba80b1db8bc1f9298c23.scope - libcontainer container 08b52236426a5d45c1a5f2b9d3376a708e24c8834cb7ba80b1db8bc1f9298c23. 
Sep 12 10:13:12.986771 containerd[1910]: time="2025-09-12T10:13:12.986720863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wbb7f,Uid:3487540f-2f5f-465f-80c9-894ed62fdbb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"08b52236426a5d45c1a5f2b9d3376a708e24c8834cb7ba80b1db8bc1f9298c23\"" Sep 12 10:13:13.009030 containerd[1910]: time="2025-09-12T10:13:13.008946628Z" level=info msg="CreateContainer within sandbox \"08b52236426a5d45c1a5f2b9d3376a708e24c8834cb7ba80b1db8bc1f9298c23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 10:13:13.014490 containerd[1910]: time="2025-09-12T10:13:13.013806965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sbrtx,Uid:d44f4f2e-08e9-4df8-bed8-13886e659bab,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf10b6eff4d137a53c9f4aa08833b9bf4a299b90bc147208e4b848e0776602d4\"" Sep 12 10:13:13.021681 containerd[1910]: time="2025-09-12T10:13:13.021511084Z" level=info msg="CreateContainer within sandbox \"cf10b6eff4d137a53c9f4aa08833b9bf4a299b90bc147208e4b848e0776602d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 10:13:13.043883 containerd[1910]: time="2025-09-12T10:13:13.043809427Z" level=info msg="CreateContainer within sandbox \"cf10b6eff4d137a53c9f4aa08833b9bf4a299b90bc147208e4b848e0776602d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b519af9f7fc76883fc382fe2d3ca1e19293b239b2fa0f2fbcfcd999edbc48aed\"" Sep 12 10:13:13.044630 containerd[1910]: time="2025-09-12T10:13:13.044593710Z" level=info msg="StartContainer for \"b519af9f7fc76883fc382fe2d3ca1e19293b239b2fa0f2fbcfcd999edbc48aed\"" Sep 12 10:13:13.047708 containerd[1910]: time="2025-09-12T10:13:13.047603612Z" level=info msg="CreateContainer within sandbox \"08b52236426a5d45c1a5f2b9d3376a708e24c8834cb7ba80b1db8bc1f9298c23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"d74a7d97716f4eeeb13cd90d81816038abbfaab861bc7d9108520997cf9936a4\"" Sep 12 10:13:13.049267 containerd[1910]: time="2025-09-12T10:13:13.049225865Z" level=info msg="StartContainer for \"d74a7d97716f4eeeb13cd90d81816038abbfaab861bc7d9108520997cf9936a4\"" Sep 12 10:13:13.080628 systemd[1]: Started cri-containerd-d74a7d97716f4eeeb13cd90d81816038abbfaab861bc7d9108520997cf9936a4.scope - libcontainer container d74a7d97716f4eeeb13cd90d81816038abbfaab861bc7d9108520997cf9936a4. Sep 12 10:13:13.083666 systemd[1]: Started cri-containerd-b519af9f7fc76883fc382fe2d3ca1e19293b239b2fa0f2fbcfcd999edbc48aed.scope - libcontainer container b519af9f7fc76883fc382fe2d3ca1e19293b239b2fa0f2fbcfcd999edbc48aed. Sep 12 10:13:13.142791 containerd[1910]: time="2025-09-12T10:13:13.142650017Z" level=info msg="StartContainer for \"b519af9f7fc76883fc382fe2d3ca1e19293b239b2fa0f2fbcfcd999edbc48aed\" returns successfully" Sep 12 10:13:13.142791 containerd[1910]: time="2025-09-12T10:13:13.142668724Z" level=info msg="StartContainer for \"d74a7d97716f4eeeb13cd90d81816038abbfaab861bc7d9108520997cf9936a4\" returns successfully" Sep 12 10:13:13.774054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215647873.mount: Deactivated successfully. 
Sep 12 10:13:13.941648 kubelet[3205]: I0912 10:13:13.940912 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sbrtx" podStartSLOduration=23.94089839 podStartE2EDuration="23.94089839s" podCreationTimestamp="2025-09-12 10:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:13:13.940781806 +0000 UTC m=+29.377448828" watchObservedRunningTime="2025-09-12 10:13:13.94089839 +0000 UTC m=+29.377565405" Sep 12 10:13:24.959560 kubelet[3205]: I0912 10:13:24.959179 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wbb7f" podStartSLOduration=34.959163551 podStartE2EDuration="34.959163551s" podCreationTimestamp="2025-09-12 10:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:13:13.954999125 +0000 UTC m=+29.391666123" watchObservedRunningTime="2025-09-12 10:13:24.959163551 +0000 UTC m=+40.395830567" Sep 12 10:13:28.974812 systemd[1]: Started sshd@9-172.31.20.240:22-147.75.109.163:49498.service - OpenSSH per-connection server daemon (147.75.109.163:49498). Sep 12 10:13:29.172336 sshd[4779]: Accepted publickey for core from 147.75.109.163 port 49498 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E Sep 12 10:13:29.175305 sshd-session[4779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:13:29.181882 systemd-logind[1901]: New session 10 of user core. Sep 12 10:13:29.188669 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 12 10:13:30.078970 sshd[4781]: Connection closed by 147.75.109.163 port 49498
Sep 12 10:13:30.080718 sshd-session[4779]: pam_unix(sshd:session): session closed for user core
Sep 12 10:13:30.084167 systemd[1]: sshd@9-172.31.20.240:22-147.75.109.163:49498.service: Deactivated successfully.
Sep 12 10:13:30.086825 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 10:13:30.089342 systemd-logind[1901]: Session 10 logged out. Waiting for processes to exit.
Sep 12 10:13:30.091719 systemd-logind[1901]: Removed session 10.
Sep 12 10:13:35.117864 systemd[1]: Started sshd@10-172.31.20.240:22-147.75.109.163:45302.service - OpenSSH per-connection server daemon (147.75.109.163:45302).
Sep 12 10:13:35.288863 sshd[4794]: Accepted publickey for core from 147.75.109.163 port 45302 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:13:35.290421 sshd-session[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:13:35.295878 systemd-logind[1901]: New session 11 of user core.
Sep 12 10:13:35.299721 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 10:13:35.516693 sshd[4796]: Connection closed by 147.75.109.163 port 45302
Sep 12 10:13:35.517564 sshd-session[4794]: pam_unix(sshd:session): session closed for user core
Sep 12 10:13:35.521298 systemd[1]: sshd@10-172.31.20.240:22-147.75.109.163:45302.service: Deactivated successfully.
Sep 12 10:13:35.525368 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 10:13:35.527727 systemd-logind[1901]: Session 11 logged out. Waiting for processes to exit.
Sep 12 10:13:35.528948 systemd-logind[1901]: Removed session 11.
Sep 12 10:13:40.556736 systemd[1]: Started sshd@11-172.31.20.240:22-147.75.109.163:39688.service - OpenSSH per-connection server daemon (147.75.109.163:39688).
Sep 12 10:13:40.716509 sshd[4810]: Accepted publickey for core from 147.75.109.163 port 39688 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:13:40.717962 sshd-session[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:13:40.724978 systemd-logind[1901]: New session 12 of user core.
Sep 12 10:13:40.729770 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 10:13:40.926377 sshd[4812]: Connection closed by 147.75.109.163 port 39688
Sep 12 10:13:40.926962 sshd-session[4810]: pam_unix(sshd:session): session closed for user core
Sep 12 10:13:40.930691 systemd[1]: sshd@11-172.31.20.240:22-147.75.109.163:39688.service: Deactivated successfully.
Sep 12 10:13:40.933021 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 10:13:40.933919 systemd-logind[1901]: Session 12 logged out. Waiting for processes to exit.
Sep 12 10:13:40.935250 systemd-logind[1901]: Removed session 12.
Sep 12 10:13:45.962795 systemd[1]: Started sshd@12-172.31.20.240:22-147.75.109.163:39694.service - OpenSSH per-connection server daemon (147.75.109.163:39694).
Sep 12 10:13:46.166580 sshd[4828]: Accepted publickey for core from 147.75.109.163 port 39694 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:13:46.168084 sshd-session[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:13:46.172940 systemd-logind[1901]: New session 13 of user core.
Sep 12 10:13:46.183715 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 10:13:46.390689 sshd[4830]: Connection closed by 147.75.109.163 port 39694
Sep 12 10:13:46.391342 sshd-session[4828]: pam_unix(sshd:session): session closed for user core
Sep 12 10:13:46.395150 systemd[1]: sshd@12-172.31.20.240:22-147.75.109.163:39694.service: Deactivated successfully.
Sep 12 10:13:46.397696 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 10:13:46.398634 systemd-logind[1901]: Session 13 logged out. Waiting for processes to exit.
Sep 12 10:13:46.399570 systemd-logind[1901]: Removed session 13.
Sep 12 10:13:46.428804 systemd[1]: Started sshd@13-172.31.20.240:22-147.75.109.163:39696.service - OpenSSH per-connection server daemon (147.75.109.163:39696).
Sep 12 10:13:46.584681 sshd[4843]: Accepted publickey for core from 147.75.109.163 port 39696 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:13:46.586061 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:13:46.591650 systemd-logind[1901]: New session 14 of user core.
Sep 12 10:13:46.596639 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 10:13:46.861921 sshd[4845]: Connection closed by 147.75.109.163 port 39696
Sep 12 10:13:46.863618 sshd-session[4843]: pam_unix(sshd:session): session closed for user core
Sep 12 10:13:46.871621 systemd[1]: sshd@13-172.31.20.240:22-147.75.109.163:39696.service: Deactivated successfully.
Sep 12 10:13:46.877239 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 10:13:46.878645 systemd-logind[1901]: Session 14 logged out. Waiting for processes to exit.
Sep 12 10:13:46.879974 systemd-logind[1901]: Removed session 14.
Sep 12 10:13:46.903817 systemd[1]: Started sshd@14-172.31.20.240:22-147.75.109.163:39702.service - OpenSSH per-connection server daemon (147.75.109.163:39702).
Sep 12 10:13:47.091270 sshd[4855]: Accepted publickey for core from 147.75.109.163 port 39702 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:13:47.092653 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:13:47.098279 systemd-logind[1901]: New session 15 of user core.
Sep 12 10:13:47.107658 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 10:13:47.325124 sshd[4857]: Connection closed by 147.75.109.163 port 39702
Sep 12 10:13:47.325769 sshd-session[4855]: pam_unix(sshd:session): session closed for user core
Sep 12 10:13:47.329392 systemd[1]: sshd@14-172.31.20.240:22-147.75.109.163:39702.service: Deactivated successfully.
Sep 12 10:13:47.331441 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 10:13:47.332313 systemd-logind[1901]: Session 15 logged out. Waiting for processes to exit.
Sep 12 10:13:47.333768 systemd-logind[1901]: Removed session 15.
Sep 12 10:13:52.367856 systemd[1]: Started sshd@15-172.31.20.240:22-147.75.109.163:54220.service - OpenSSH per-connection server daemon (147.75.109.163:54220).
Sep 12 10:13:52.528130 sshd[4869]: Accepted publickey for core from 147.75.109.163 port 54220 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:13:52.529430 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:13:52.535614 systemd-logind[1901]: New session 16 of user core.
Sep 12 10:13:52.538863 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 10:13:52.732736 sshd[4871]: Connection closed by 147.75.109.163 port 54220
Sep 12 10:13:52.733627 sshd-session[4869]: pam_unix(sshd:session): session closed for user core
Sep 12 10:13:52.736803 systemd[1]: sshd@15-172.31.20.240:22-147.75.109.163:54220.service: Deactivated successfully.
Sep 12 10:13:52.738978 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 10:13:52.740778 systemd-logind[1901]: Session 16 logged out. Waiting for processes to exit.
Sep 12 10:13:52.742026 systemd-logind[1901]: Removed session 16.
Sep 12 10:13:57.769862 systemd[1]: Started sshd@16-172.31.20.240:22-147.75.109.163:54228.service - OpenSSH per-connection server daemon (147.75.109.163:54228).
Sep 12 10:13:57.930335 sshd[4885]: Accepted publickey for core from 147.75.109.163 port 54228 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:13:57.931714 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:13:57.937450 systemd-logind[1901]: New session 17 of user core.
Sep 12 10:13:57.944720 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 10:13:58.141728 sshd[4887]: Connection closed by 147.75.109.163 port 54228
Sep 12 10:13:58.143324 sshd-session[4885]: pam_unix(sshd:session): session closed for user core
Sep 12 10:13:58.147160 systemd-logind[1901]: Session 17 logged out. Waiting for processes to exit.
Sep 12 10:13:58.147894 systemd[1]: sshd@16-172.31.20.240:22-147.75.109.163:54228.service: Deactivated successfully.
Sep 12 10:13:58.150287 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 10:13:58.151414 systemd-logind[1901]: Removed session 17.
Sep 12 10:13:58.176846 systemd[1]: Started sshd@17-172.31.20.240:22-147.75.109.163:54232.service - OpenSSH per-connection server daemon (147.75.109.163:54232).
Sep 12 10:13:58.339348 sshd[4899]: Accepted publickey for core from 147.75.109.163 port 54232 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:13:58.340856 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:13:58.345641 systemd-logind[1901]: New session 18 of user core.
Sep 12 10:13:58.352683 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 10:13:59.085352 sshd[4901]: Connection closed by 147.75.109.163 port 54232
Sep 12 10:13:59.086436 sshd-session[4899]: pam_unix(sshd:session): session closed for user core
Sep 12 10:13:59.099810 systemd[1]: sshd@17-172.31.20.240:22-147.75.109.163:54232.service: Deactivated successfully.
Sep 12 10:13:59.102212 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 10:13:59.104212 systemd-logind[1901]: Session 18 logged out. Waiting for processes to exit.
Sep 12 10:13:59.105536 systemd-logind[1901]: Removed session 18.
Sep 12 10:13:59.122849 systemd[1]: Started sshd@18-172.31.20.240:22-147.75.109.163:54248.service - OpenSSH per-connection server daemon (147.75.109.163:54248).
Sep 12 10:13:59.308763 sshd[4911]: Accepted publickey for core from 147.75.109.163 port 54248 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:13:59.310265 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:13:59.316237 systemd-logind[1901]: New session 19 of user core.
Sep 12 10:13:59.325718 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 10:14:00.289042 sshd[4913]: Connection closed by 147.75.109.163 port 54248
Sep 12 10:14:00.290616 sshd-session[4911]: pam_unix(sshd:session): session closed for user core
Sep 12 10:14:00.299898 systemd[1]: sshd@18-172.31.20.240:22-147.75.109.163:54248.service: Deactivated successfully.
Sep 12 10:14:00.302788 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 10:14:00.303922 systemd-logind[1901]: Session 19 logged out. Waiting for processes to exit.
Sep 12 10:14:00.305178 systemd-logind[1901]: Removed session 19.
Sep 12 10:14:00.328857 systemd[1]: Started sshd@19-172.31.20.240:22-147.75.109.163:50366.service - OpenSSH per-connection server daemon (147.75.109.163:50366).
Sep 12 10:14:00.492752 sshd[4930]: Accepted publickey for core from 147.75.109.163 port 50366 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:14:00.494349 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:14:00.508136 systemd-logind[1901]: New session 20 of user core.
Sep 12 10:14:00.514704 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 10:14:00.889562 sshd[4932]: Connection closed by 147.75.109.163 port 50366
Sep 12 10:14:00.889433 sshd-session[4930]: pam_unix(sshd:session): session closed for user core
Sep 12 10:14:00.895755 systemd[1]: sshd@19-172.31.20.240:22-147.75.109.163:50366.service: Deactivated successfully.
Sep 12 10:14:00.900864 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 10:14:00.902121 systemd-logind[1901]: Session 20 logged out. Waiting for processes to exit.
Sep 12 10:14:00.904562 systemd-logind[1901]: Removed session 20.
Sep 12 10:14:00.921340 systemd[1]: Started sshd@20-172.31.20.240:22-147.75.109.163:50370.service - OpenSSH per-connection server daemon (147.75.109.163:50370).
Sep 12 10:14:01.099595 sshd[4941]: Accepted publickey for core from 147.75.109.163 port 50370 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:14:01.102659 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:14:01.109149 systemd-logind[1901]: New session 21 of user core.
Sep 12 10:14:01.119745 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 10:14:01.587669 sshd[4943]: Connection closed by 147.75.109.163 port 50370
Sep 12 10:14:01.588838 sshd-session[4941]: pam_unix(sshd:session): session closed for user core
Sep 12 10:14:01.602019 systemd[1]: sshd@20-172.31.20.240:22-147.75.109.163:50370.service: Deactivated successfully.
Sep 12 10:14:01.612050 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 10:14:01.617124 systemd-logind[1901]: Session 21 logged out. Waiting for processes to exit.
Sep 12 10:14:01.623762 systemd-logind[1901]: Removed session 21.
Sep 12 10:14:06.623824 systemd[1]: Started sshd@21-172.31.20.240:22-147.75.109.163:50382.service - OpenSSH per-connection server daemon (147.75.109.163:50382).
Sep 12 10:14:06.782218 sshd[4956]: Accepted publickey for core from 147.75.109.163 port 50382 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:14:06.783858 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:14:06.788740 systemd-logind[1901]: New session 22 of user core.
Sep 12 10:14:06.792636 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 10:14:06.998502 sshd[4958]: Connection closed by 147.75.109.163 port 50382
Sep 12 10:14:06.999285 sshd-session[4956]: pam_unix(sshd:session): session closed for user core
Sep 12 10:14:07.003114 systemd-logind[1901]: Session 22 logged out. Waiting for processes to exit.
Sep 12 10:14:07.004196 systemd[1]: sshd@21-172.31.20.240:22-147.75.109.163:50382.service: Deactivated successfully.
Sep 12 10:14:07.006914 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 10:14:07.008304 systemd-logind[1901]: Removed session 22.
Sep 12 10:14:12.041074 systemd[1]: Started sshd@22-172.31.20.240:22-147.75.109.163:55384.service - OpenSSH per-connection server daemon (147.75.109.163:55384).
Sep 12 10:14:12.203940 sshd[4972]: Accepted publickey for core from 147.75.109.163 port 55384 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:14:12.205276 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:14:12.210172 systemd-logind[1901]: New session 23 of user core.
Sep 12 10:14:12.216684 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 10:14:12.402591 sshd[4974]: Connection closed by 147.75.109.163 port 55384
Sep 12 10:14:12.403160 sshd-session[4972]: pam_unix(sshd:session): session closed for user core
Sep 12 10:14:12.406357 systemd[1]: sshd@22-172.31.20.240:22-147.75.109.163:55384.service: Deactivated successfully.
Sep 12 10:14:12.408859 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 10:14:12.411403 systemd-logind[1901]: Session 23 logged out. Waiting for processes to exit.
Sep 12 10:14:12.412899 systemd-logind[1901]: Removed session 23.
Sep 12 10:14:17.436783 systemd[1]: Started sshd@23-172.31.20.240:22-147.75.109.163:55398.service - OpenSSH per-connection server daemon (147.75.109.163:55398).
Sep 12 10:14:17.623796 sshd[4986]: Accepted publickey for core from 147.75.109.163 port 55398 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:14:17.625721 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:14:17.630192 systemd-logind[1901]: New session 24 of user core.
Sep 12 10:14:17.634620 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 10:14:17.841048 sshd[4988]: Connection closed by 147.75.109.163 port 55398
Sep 12 10:14:17.841989 sshd-session[4986]: pam_unix(sshd:session): session closed for user core
Sep 12 10:14:17.845372 systemd[1]: sshd@23-172.31.20.240:22-147.75.109.163:55398.service: Deactivated successfully.
Sep 12 10:14:17.848222 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 10:14:17.850880 systemd-logind[1901]: Session 24 logged out. Waiting for processes to exit.
Sep 12 10:14:17.852002 systemd-logind[1901]: Removed session 24.
Sep 12 10:14:17.882840 systemd[1]: Started sshd@24-172.31.20.240:22-147.75.109.163:55414.service - OpenSSH per-connection server daemon (147.75.109.163:55414).
Sep 12 10:14:18.047385 sshd[5000]: Accepted publickey for core from 147.75.109.163 port 55414 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E
Sep 12 10:14:18.048901 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:14:18.056119 systemd-logind[1901]: New session 25 of user core.
Sep 12 10:14:18.059642 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 10:14:20.624481 containerd[1910]: time="2025-09-12T10:14:20.623895471Z" level=info msg="StopContainer for \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\" with timeout 30 (s)"
Sep 12 10:14:20.633060 containerd[1910]: time="2025-09-12T10:14:20.632493606Z" level=info msg="Stop container \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\" with signal terminated"
Sep 12 10:14:20.691256 systemd[1]: cri-containerd-0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa.scope: Deactivated successfully.
Sep 12 10:14:20.697706 containerd[1910]: time="2025-09-12T10:14:20.697621120Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 10:14:20.713080 containerd[1910]: time="2025-09-12T10:14:20.713038921Z" level=info msg="StopContainer for \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\" with timeout 2 (s)"
Sep 12 10:14:20.713512 containerd[1910]: time="2025-09-12T10:14:20.713474916Z" level=info msg="Stop container \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\" with signal terminated"
Sep 12 10:14:20.728043 systemd-networkd[1833]: lxc_health: Link DOWN
Sep 12 10:14:20.728054 systemd-networkd[1833]: lxc_health: Lost carrier
Sep 12 10:14:20.744399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa-rootfs.mount: Deactivated successfully.
Sep 12 10:14:20.755513 systemd[1]: cri-containerd-6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673.scope: Deactivated successfully.
Sep 12 10:14:20.756061 systemd[1]: cri-containerd-6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673.scope: Consumed 8.162s CPU time, 196.1M memory peak, 75.3M read from disk, 13.3M written to disk.
Sep 12 10:14:20.766096 containerd[1910]: time="2025-09-12T10:14:20.765949265Z" level=info msg="shim disconnected" id=0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa namespace=k8s.io
Sep 12 10:14:20.766567 containerd[1910]: time="2025-09-12T10:14:20.766170639Z" level=warning msg="cleaning up after shim disconnected" id=0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa namespace=k8s.io
Sep 12 10:14:20.766567 containerd[1910]: time="2025-09-12T10:14:20.766190959Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:14:20.793250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673-rootfs.mount: Deactivated successfully.
Sep 12 10:14:20.799563 containerd[1910]: time="2025-09-12T10:14:20.798748758Z" level=warning msg="cleanup warnings time=\"2025-09-12T10:14:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 10:14:20.803036 containerd[1910]: time="2025-09-12T10:14:20.802931205Z" level=info msg="StopContainer for \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\" returns successfully"
Sep 12 10:14:20.808432 containerd[1910]: time="2025-09-12T10:14:20.808224607Z" level=info msg="shim disconnected" id=6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673 namespace=k8s.io
Sep 12 10:14:20.808432 containerd[1910]: time="2025-09-12T10:14:20.808274382Z" level=warning msg="cleaning up after shim disconnected" id=6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673 namespace=k8s.io
Sep 12 10:14:20.808432 containerd[1910]: time="2025-09-12T10:14:20.808281935Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:14:20.820308 containerd[1910]: time="2025-09-12T10:14:20.820176827Z" level=info msg="StopPodSandbox for \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\""
Sep 12 10:14:20.822526 containerd[1910]: time="2025-09-12T10:14:20.822441223Z" level=info msg="Container to stop \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:14:20.828007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7-shm.mount: Deactivated successfully.
Sep 12 10:14:20.842259 containerd[1910]: time="2025-09-12T10:14:20.842208985Z" level=info msg="StopContainer for \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\" returns successfully"
Sep 12 10:14:20.843184 containerd[1910]: time="2025-09-12T10:14:20.843156591Z" level=info msg="StopPodSandbox for \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\""
Sep 12 10:14:20.843474 systemd[1]: cri-containerd-e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7.scope: Deactivated successfully.
Sep 12 10:14:20.845309 containerd[1910]: time="2025-09-12T10:14:20.845275011Z" level=info msg="Container to stop \"f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:14:20.845309 containerd[1910]: time="2025-09-12T10:14:20.845302619Z" level=info msg="Container to stop \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:14:20.845614 containerd[1910]: time="2025-09-12T10:14:20.845317034Z" level=info msg="Container to stop \"db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:14:20.845614 containerd[1910]: time="2025-09-12T10:14:20.845332447Z" level=info msg="Container to stop \"55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:14:20.845614 containerd[1910]: time="2025-09-12T10:14:20.845346684Z" level=info msg="Container to stop \"4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:14:20.848905 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9-shm.mount: Deactivated successfully.
Sep 12 10:14:20.860820 systemd[1]: cri-containerd-551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9.scope: Deactivated successfully.
Sep 12 10:14:20.902283 containerd[1910]: time="2025-09-12T10:14:20.900985995Z" level=info msg="shim disconnected" id=551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9 namespace=k8s.io
Sep 12 10:14:20.902283 containerd[1910]: time="2025-09-12T10:14:20.901066598Z" level=warning msg="cleaning up after shim disconnected" id=551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9 namespace=k8s.io
Sep 12 10:14:20.902283 containerd[1910]: time="2025-09-12T10:14:20.901080437Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:14:20.903127 containerd[1910]: time="2025-09-12T10:14:20.903072001Z" level=info msg="shim disconnected" id=e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7 namespace=k8s.io
Sep 12 10:14:20.904543 containerd[1910]: time="2025-09-12T10:14:20.904508794Z" level=warning msg="cleaning up after shim disconnected" id=e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7 namespace=k8s.io
Sep 12 10:14:20.904653 containerd[1910]: time="2025-09-12T10:14:20.904635287Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:14:20.930602 containerd[1910]: time="2025-09-12T10:14:20.930551436Z" level=info msg="TearDown network for sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" successfully"
Sep 12 10:14:20.930602 containerd[1910]: time="2025-09-12T10:14:20.930597909Z" level=info msg="StopPodSandbox for \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" returns successfully"
Sep 12 10:14:20.943142 containerd[1910]: time="2025-09-12T10:14:20.943091618Z" level=warning msg="cleanup warnings time=\"2025-09-12T10:14:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 10:14:20.944919 containerd[1910]: time="2025-09-12T10:14:20.944867778Z" level=info msg="TearDown network for sandbox \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\" successfully"
Sep 12 10:14:20.945452 containerd[1910]: time="2025-09-12T10:14:20.945422142Z" level=info msg="StopPodSandbox for \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\" returns successfully"
Sep 12 10:14:21.087512 kubelet[3205]: I0912 10:14:21.087383 3205 scope.go:117] "RemoveContainer" containerID="0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa"
Sep 12 10:14:21.094097 containerd[1910]: time="2025-09-12T10:14:21.094033340Z" level=info msg="RemoveContainer for \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\""
Sep 12 10:14:21.098183 containerd[1910]: time="2025-09-12T10:14:21.098125726Z" level=info msg="RemoveContainer for \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\" returns successfully"
Sep 12 10:14:21.098602 kubelet[3205]: I0912 10:14:21.098565 3205 scope.go:117] "RemoveContainer" containerID="0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa"
Sep 12 10:14:21.098897 containerd[1910]: time="2025-09-12T10:14:21.098856597Z" level=error msg="ContainerStatus for \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\": not found"
Sep 12 10:14:21.109373 kubelet[3205]: E0912 10:14:21.109295 3205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\": not found" containerID="0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa"
Sep 12 10:14:21.110673 kubelet[3205]: I0912 10:14:21.110575 3205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa"} err="failed to get container status \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d2c52dd7f384ba2137b0a5a49d758508f139bb100b8059298dd80e2cca1b9fa\": not found"
Sep 12 10:14:21.110673 kubelet[3205]: I0912 10:14:21.110666 3205 scope.go:117] "RemoveContainer" containerID="6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673"
Sep 12 10:14:21.115241 containerd[1910]: time="2025-09-12T10:14:21.114727273Z" level=info msg="RemoveContainer for \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\""
Sep 12 10:14:21.118700 containerd[1910]: time="2025-09-12T10:14:21.118661670Z" level=info msg="RemoveContainer for \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\" returns successfully"
Sep 12 10:14:21.118925 kubelet[3205]: I0912 10:14:21.118897 3205 scope.go:117] "RemoveContainer" containerID="f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587"
Sep 12 10:14:21.119967 containerd[1910]: time="2025-09-12T10:14:21.119932997Z" level=info msg="RemoveContainer for \"f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587\""
Sep 12 10:14:21.123439 containerd[1910]: time="2025-09-12T10:14:21.123404923Z" level=info msg="RemoveContainer for \"f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587\" returns successfully"
Sep 12 10:14:21.123878 kubelet[3205]: I0912 10:14:21.123571 3205 scope.go:117] "RemoveContainer" containerID="55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d"
Sep 12 10:14:21.124504 containerd[1910]: time="2025-09-12T10:14:21.124485097Z" level=info msg="RemoveContainer for \"55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d\""
Sep 12 10:14:21.128435 containerd[1910]: time="2025-09-12T10:14:21.128177476Z" level=info msg="RemoveContainer for \"55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d\" returns successfully"
Sep 12 10:14:21.128526 kubelet[3205]: I0912 10:14:21.128321 3205 scope.go:117] "RemoveContainer" containerID="db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67"
Sep 12 10:14:21.129367 containerd[1910]: time="2025-09-12T10:14:21.129349127Z" level=info msg="RemoveContainer for \"db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67\""
Sep 12 10:14:21.131468 kubelet[3205]: I0912 10:14:21.131400 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06063492-233b-494d-acd5-152ffdacab1c-cilium-config-path\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.133156 containerd[1910]: time="2025-09-12T10:14:21.133123865Z" level=info msg="RemoveContainer for \"db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67\" returns successfully"
Sep 12 10:14:21.139525 kubelet[3205]: I0912 10:14:21.131438 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-host-proc-sys-kernel\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.139609 kubelet[3205]: I0912 10:14:21.139564 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x2q5\" (UniqueName: \"kubernetes.io/projected/06063492-233b-494d-acd5-152ffdacab1c-kube-api-access-8x2q5\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.139609 kubelet[3205]: I0912 10:14:21.139589 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36a3c811-486b-4e8a-81dc-aa84690fd247-cilium-config-path\") pod \"36a3c811-486b-4e8a-81dc-aa84690fd247\" (UID: \"36a3c811-486b-4e8a-81dc-aa84690fd247\") "
Sep 12 10:14:21.139609 kubelet[3205]: I0912 10:14:21.139606 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-xtables-lock\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.139720 kubelet[3205]: I0912 10:14:21.139623 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06063492-233b-494d-acd5-152ffdacab1c-clustermesh-secrets\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.139720 kubelet[3205]: I0912 10:14:21.139639 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-hostproc\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.139720 kubelet[3205]: I0912 10:14:21.139656 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-etc-cni-netd\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.139720 kubelet[3205]: I0912 10:14:21.139673 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6kw\" (UniqueName: \"kubernetes.io/projected/36a3c811-486b-4e8a-81dc-aa84690fd247-kube-api-access-ks6kw\") pod \"36a3c811-486b-4e8a-81dc-aa84690fd247\" (UID: \"36a3c811-486b-4e8a-81dc-aa84690fd247\") "
Sep 12 10:14:21.139720 kubelet[3205]: I0912 10:14:21.139689 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-cilium-run\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.139720 kubelet[3205]: I0912 10:14:21.139701 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-cilium-cgroup\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.139911 kubelet[3205]: I0912 10:14:21.139718 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06063492-233b-494d-acd5-152ffdacab1c-hubble-tls\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.139911 kubelet[3205]: I0912 10:14:21.139731 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-host-proc-sys-net\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.139911 kubelet[3205]: I0912 10:14:21.139743 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-lib-modules\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.139911 kubelet[3205]: I0912 10:14:21.139757 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-bpf-maps\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.139911 kubelet[3205]: I0912 10:14:21.139775 3205 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-cni-path\") pod \"06063492-233b-494d-acd5-152ffdacab1c\" (UID: \"06063492-233b-494d-acd5-152ffdacab1c\") "
Sep 12 10:14:21.142839 kubelet[3205]: I0912 10:14:21.140318 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-cni-path" (OuterVolumeSpecName: "cni-path") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:14:21.142839 kubelet[3205]: I0912 10:14:21.142618 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:14:21.142839 kubelet[3205]: I0912 10:14:21.140317 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06063492-233b-494d-acd5-152ffdacab1c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 12 10:14:21.142839 kubelet[3205]: I0912 10:14:21.142664 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:14:21.149701 kubelet[3205]: I0912 10:14:21.149652 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06063492-233b-494d-acd5-152ffdacab1c-kube-api-access-8x2q5" (OuterVolumeSpecName: "kube-api-access-8x2q5") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "kube-api-access-8x2q5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 10:14:21.151944 kubelet[3205]: I0912 10:14:21.151877 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36a3c811-486b-4e8a-81dc-aa84690fd247-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "36a3c811-486b-4e8a-81dc-aa84690fd247" (UID: "36a3c811-486b-4e8a-81dc-aa84690fd247"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 12 10:14:21.151944 kubelet[3205]: I0912 10:14:21.151934 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:14:21.156748 kubelet[3205]: I0912 10:14:21.156649 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06063492-233b-494d-acd5-152ffdacab1c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 12 10:14:21.156748 kubelet[3205]: I0912 10:14:21.156714 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-hostproc" (OuterVolumeSpecName: "hostproc") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:14:21.156958 kubelet[3205]: I0912 10:14:21.156826 3205 scope.go:117] "RemoveContainer" containerID="4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902"
Sep 12 10:14:21.160228 kubelet[3205]: I0912 10:14:21.160116 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:14:21.160228 kubelet[3205]: I0912 10:14:21.160162 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:14:21.160228 kubelet[3205]: I0912 10:14:21.160178 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "bpf-maps".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:14:21.161175 kubelet[3205]: I0912 10:14:21.160700 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a3c811-486b-4e8a-81dc-aa84690fd247-kube-api-access-ks6kw" (OuterVolumeSpecName: "kube-api-access-ks6kw") pod "36a3c811-486b-4e8a-81dc-aa84690fd247" (UID: "36a3c811-486b-4e8a-81dc-aa84690fd247"). InnerVolumeSpecName "kube-api-access-ks6kw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 10:14:21.161175 kubelet[3205]: I0912 10:14:21.160988 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:14:21.161175 kubelet[3205]: I0912 10:14:21.161010 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:14:21.161647 containerd[1910]: time="2025-09-12T10:14:21.161502192Z" level=info msg="RemoveContainer for \"4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902\"" Sep 12 10:14:21.162972 kubelet[3205]: I0912 10:14:21.162942 3205 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06063492-233b-494d-acd5-152ffdacab1c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "06063492-233b-494d-acd5-152ffdacab1c" (UID: "06063492-233b-494d-acd5-152ffdacab1c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 10:14:21.175414 containerd[1910]: time="2025-09-12T10:14:21.175369906Z" level=info msg="RemoveContainer for \"4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902\" returns successfully" Sep 12 10:14:21.175866 kubelet[3205]: I0912 10:14:21.175603 3205 scope.go:117] "RemoveContainer" containerID="6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673" Sep 12 10:14:21.175948 containerd[1910]: time="2025-09-12T10:14:21.175807329Z" level=error msg="ContainerStatus for \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\": not found" Sep 12 10:14:21.175985 kubelet[3205]: E0912 10:14:21.175933 3205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\": not found" containerID="6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673" Sep 12 10:14:21.175985 kubelet[3205]: I0912 10:14:21.175969 3205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673"} err="failed to get container status \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d54d257e13fc0261e5fd16f86f4e511dcceeb2df39d0d6f64ca677af9bd5673\": not found" Sep 12 10:14:21.176045 kubelet[3205]: I0912 10:14:21.175988 3205 scope.go:117] "RemoveContainer" containerID="f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587" Sep 12 10:14:21.176250 containerd[1910]: time="2025-09-12T10:14:21.176186901Z" level=error msg="ContainerStatus for 
\"f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587\": not found" Sep 12 10:14:21.176293 kubelet[3205]: E0912 10:14:21.176280 3205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587\": not found" containerID="f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587" Sep 12 10:14:21.176326 kubelet[3205]: I0912 10:14:21.176297 3205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587"} err="failed to get container status \"f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7ad4ba2e57a26b42e676570517e321162e60a1ef589aedbc1c4a657892e8587\": not found" Sep 12 10:14:21.176326 kubelet[3205]: I0912 10:14:21.176314 3205 scope.go:117] "RemoveContainer" containerID="55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d" Sep 12 10:14:21.176520 containerd[1910]: time="2025-09-12T10:14:21.176478273Z" level=error msg="ContainerStatus for \"55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d\": not found" Sep 12 10:14:21.176714 kubelet[3205]: E0912 10:14:21.176587 3205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d\": not found" 
containerID="55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d" Sep 12 10:14:21.176714 kubelet[3205]: I0912 10:14:21.176605 3205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d"} err="failed to get container status \"55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d\": rpc error: code = NotFound desc = an error occurred when try to find container \"55e52a0d271f60ff9c26f522995769f8a2d03494d6a552d532fd8166f631594d\": not found" Sep 12 10:14:21.176714 kubelet[3205]: I0912 10:14:21.176621 3205 scope.go:117] "RemoveContainer" containerID="db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67" Sep 12 10:14:21.176915 containerd[1910]: time="2025-09-12T10:14:21.176884918Z" level=error msg="ContainerStatus for \"db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67\": not found" Sep 12 10:14:21.177001 kubelet[3205]: E0912 10:14:21.176985 3205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67\": not found" containerID="db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67" Sep 12 10:14:21.177039 kubelet[3205]: I0912 10:14:21.177003 3205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67"} err="failed to get container status \"db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67\": rpc error: code = NotFound desc = an error occurred when try to find container \"db7496bc7c8d2568f22e0c6fc81b9334c923d43660be1f96068648eac0e02d67\": not found" Sep 12 
10:14:21.177039 kubelet[3205]: I0912 10:14:21.177016 3205 scope.go:117] "RemoveContainer" containerID="4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902" Sep 12 10:14:21.177156 containerd[1910]: time="2025-09-12T10:14:21.177127724Z" level=error msg="ContainerStatus for \"4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902\": not found" Sep 12 10:14:21.177290 kubelet[3205]: E0912 10:14:21.177242 3205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902\": not found" containerID="4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902" Sep 12 10:14:21.177703 kubelet[3205]: I0912 10:14:21.177669 3205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902"} err="failed to get container status \"4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ac0783217528161d9fdb0aac5eec535094cc66a601b6493cca7fc181a9f5902\": not found" Sep 12 10:14:21.240569 kubelet[3205]: I0912 10:14:21.240528 3205 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06063492-233b-494d-acd5-152ffdacab1c-clustermesh-secrets\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240569 kubelet[3205]: I0912 10:14:21.240567 3205 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-hostproc\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240569 kubelet[3205]: I0912 
10:14:21.240580 3205 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-etc-cni-netd\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240569 kubelet[3205]: I0912 10:14:21.240590 3205 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6kw\" (UniqueName: \"kubernetes.io/projected/36a3c811-486b-4e8a-81dc-aa84690fd247-kube-api-access-ks6kw\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240569 kubelet[3205]: I0912 10:14:21.240599 3205 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-cilium-run\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240569 kubelet[3205]: I0912 10:14:21.240609 3205 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-cilium-cgroup\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240569 kubelet[3205]: I0912 10:14:21.240617 3205 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06063492-233b-494d-acd5-152ffdacab1c-hubble-tls\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240982 kubelet[3205]: I0912 10:14:21.240627 3205 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-host-proc-sys-net\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240982 kubelet[3205]: I0912 10:14:21.240635 3205 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-lib-modules\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240982 kubelet[3205]: I0912 10:14:21.240643 3205 reconciler_common.go:299] "Volume 
detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-bpf-maps\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240982 kubelet[3205]: I0912 10:14:21.240652 3205 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-cni-path\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240982 kubelet[3205]: I0912 10:14:21.240659 3205 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06063492-233b-494d-acd5-152ffdacab1c-cilium-config-path\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240982 kubelet[3205]: I0912 10:14:21.240669 3205 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-host-proc-sys-kernel\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240982 kubelet[3205]: I0912 10:14:21.240676 3205 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8x2q5\" (UniqueName: \"kubernetes.io/projected/06063492-233b-494d-acd5-152ffdacab1c-kube-api-access-8x2q5\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.240982 kubelet[3205]: I0912 10:14:21.240688 3205 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36a3c811-486b-4e8a-81dc-aa84690fd247-cilium-config-path\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.241176 kubelet[3205]: I0912 10:14:21.240696 3205 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06063492-233b-494d-acd5-152ffdacab1c-xtables-lock\") on node \"ip-172-31-20-240\" DevicePath \"\"" Sep 12 10:14:21.381027 systemd[1]: Removed slice kubepods-besteffort-pod36a3c811_486b_4e8a_81dc_aa84690fd247.slice - libcontainer 
container kubepods-besteffort-pod36a3c811_486b_4e8a_81dc_aa84690fd247.slice. Sep 12 10:14:21.411599 systemd[1]: Removed slice kubepods-burstable-pod06063492_233b_494d_acd5_152ffdacab1c.slice - libcontainer container kubepods-burstable-pod06063492_233b_494d_acd5_152ffdacab1c.slice. Sep 12 10:14:21.411929 systemd[1]: kubepods-burstable-pod06063492_233b_494d_acd5_152ffdacab1c.slice: Consumed 8.263s CPU time, 196.5M memory peak, 75.3M read from disk, 13.3M written to disk. Sep 12 10:14:21.648343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7-rootfs.mount: Deactivated successfully. Sep 12 10:14:21.648487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9-rootfs.mount: Deactivated successfully. Sep 12 10:14:21.648550 systemd[1]: var-lib-kubelet-pods-36a3c811\x2d486b\x2d4e8a\x2d81dc\x2daa84690fd247-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dks6kw.mount: Deactivated successfully. Sep 12 10:14:21.648621 systemd[1]: var-lib-kubelet-pods-06063492\x2d233b\x2d494d\x2dacd5\x2d152ffdacab1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8x2q5.mount: Deactivated successfully. Sep 12 10:14:21.648687 systemd[1]: var-lib-kubelet-pods-06063492\x2d233b\x2d494d\x2dacd5\x2d152ffdacab1c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 10:14:21.648760 systemd[1]: var-lib-kubelet-pods-06063492\x2d233b\x2d494d\x2dacd5\x2d152ffdacab1c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 10:14:22.444371 sshd[5002]: Connection closed by 147.75.109.163 port 55414 Sep 12 10:14:22.445235 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Sep 12 10:14:22.448523 systemd[1]: sshd@24-172.31.20.240:22-147.75.109.163:55414.service: Deactivated successfully. 
Sep 12 10:14:22.450631 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 10:14:22.452044 systemd-logind[1901]: Session 25 logged out. Waiting for processes to exit. Sep 12 10:14:22.453509 systemd-logind[1901]: Removed session 25. Sep 12 10:14:22.476654 systemd[1]: Started sshd@25-172.31.20.240:22-147.75.109.163:50152.service - OpenSSH per-connection server daemon (147.75.109.163:50152). Sep 12 10:14:22.662322 sshd[5162]: Accepted publickey for core from 147.75.109.163 port 50152 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E Sep 12 10:14:22.663908 sshd-session[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:14:22.670084 systemd-logind[1901]: New session 26 of user core. Sep 12 10:14:22.679776 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 10:14:22.775575 kubelet[3205]: I0912 10:14:22.775442 3205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06063492-233b-494d-acd5-152ffdacab1c" path="/var/lib/kubelet/pods/06063492-233b-494d-acd5-152ffdacab1c/volumes" Sep 12 10:14:22.776208 kubelet[3205]: I0912 10:14:22.776050 3205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36a3c811-486b-4e8a-81dc-aa84690fd247" path="/var/lib/kubelet/pods/36a3c811-486b-4e8a-81dc-aa84690fd247/volumes" Sep 12 10:14:23.385823 ntpd[1886]: Deleting interface #12 lxc_health, fe80::708d:c3ff:feeb:a53f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs Sep 12 10:14:23.386319 ntpd[1886]: 12 Sep 10:14:23 ntpd[1886]: Deleting interface #12 lxc_health, fe80::708d:c3ff:feeb:a53f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs Sep 12 10:14:23.593481 sshd[5166]: Connection closed by 147.75.109.163 port 50152 Sep 12 10:14:23.595709 sshd-session[5162]: pam_unix(sshd:session): session closed for user core Sep 12 10:14:23.601734 systemd[1]: sshd@25-172.31.20.240:22-147.75.109.163:50152.service: Deactivated successfully. 
Sep 12 10:14:23.605820 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 10:14:23.608169 systemd-logind[1901]: Session 26 logged out. Waiting for processes to exit. Sep 12 10:14:23.611080 systemd-logind[1901]: Removed session 26. Sep 12 10:14:23.637604 systemd[1]: Started sshd@26-172.31.20.240:22-147.75.109.163:50162.service - OpenSSH per-connection server daemon (147.75.109.163:50162). Sep 12 10:14:23.702081 systemd[1]: Created slice kubepods-burstable-podac10f2ba_bef2_4c88_893b_59bd8f4949c3.slice - libcontainer container kubepods-burstable-podac10f2ba_bef2_4c88_893b_59bd8f4949c3.slice. Sep 12 10:14:23.793507 kubelet[3205]: I0912 10:14:23.793469 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcfm2\" (UniqueName: \"kubernetes.io/projected/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-kube-api-access-bcfm2\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794050 kubelet[3205]: I0912 10:14:23.794014 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-bpf-maps\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794050 kubelet[3205]: I0912 10:14:23.794051 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-hubble-tls\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794240 kubelet[3205]: I0912 10:14:23.794070 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-cilium-config-path\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794240 kubelet[3205]: I0912 10:14:23.794087 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-cni-path\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794240 kubelet[3205]: I0912 10:14:23.794139 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-cilium-cgroup\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794240 kubelet[3205]: I0912 10:14:23.794168 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-xtables-lock\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794240 kubelet[3205]: I0912 10:14:23.794186 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-cilium-ipsec-secrets\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794240 kubelet[3205]: I0912 10:14:23.794201 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-hostproc\") pod \"cilium-ss2gg\" (UID: 
\"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794446 kubelet[3205]: I0912 10:14:23.794216 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-etc-cni-netd\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794446 kubelet[3205]: I0912 10:14:23.794232 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-lib-modules\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794446 kubelet[3205]: I0912 10:14:23.794245 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-host-proc-sys-net\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794446 kubelet[3205]: I0912 10:14:23.794263 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-clustermesh-secrets\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794446 kubelet[3205]: I0912 10:14:23.794288 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-cilium-run\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.794446 kubelet[3205]: I0912 
10:14:23.794304 3205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac10f2ba-bef2-4c88-893b-59bd8f4949c3-host-proc-sys-kernel\") pod \"cilium-ss2gg\" (UID: \"ac10f2ba-bef2-4c88-893b-59bd8f4949c3\") " pod="kube-system/cilium-ss2gg" Sep 12 10:14:23.832401 sshd[5177]: Accepted publickey for core from 147.75.109.163 port 50162 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E Sep 12 10:14:23.833857 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:14:23.839095 systemd-logind[1901]: New session 27 of user core. Sep 12 10:14:23.844686 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 10:14:23.966012 sshd[5179]: Connection closed by 147.75.109.163 port 50162 Sep 12 10:14:23.967108 sshd-session[5177]: pam_unix(sshd:session): session closed for user core Sep 12 10:14:23.971137 systemd[1]: sshd@26-172.31.20.240:22-147.75.109.163:50162.service: Deactivated successfully. Sep 12 10:14:23.973500 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 10:14:23.974729 systemd-logind[1901]: Session 27 logged out. Waiting for processes to exit. Sep 12 10:14:23.975723 systemd-logind[1901]: Removed session 27. Sep 12 10:14:24.002006 systemd[1]: Started sshd@27-172.31.20.240:22-147.75.109.163:50178.service - OpenSSH per-connection server daemon (147.75.109.163:50178). Sep 12 10:14:24.022808 containerd[1910]: time="2025-09-12T10:14:24.022377735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ss2gg,Uid:ac10f2ba-bef2-4c88-893b-59bd8f4949c3,Namespace:kube-system,Attempt:0,}" Sep 12 10:14:24.050908 containerd[1910]: time="2025-09-12T10:14:24.050759823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:14:24.050908 containerd[1910]: time="2025-09-12T10:14:24.050831354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:14:24.050908 containerd[1910]: time="2025-09-12T10:14:24.050856344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:14:24.051216 containerd[1910]: time="2025-09-12T10:14:24.050959715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:14:24.087700 systemd[1]: Started cri-containerd-5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39.scope - libcontainer container 5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39. Sep 12 10:14:24.115270 containerd[1910]: time="2025-09-12T10:14:24.115214820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ss2gg,Uid:ac10f2ba-bef2-4c88-893b-59bd8f4949c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39\"" Sep 12 10:14:24.142478 containerd[1910]: time="2025-09-12T10:14:24.142426112Z" level=info msg="CreateContainer within sandbox \"5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 10:14:24.155127 containerd[1910]: time="2025-09-12T10:14:24.155082021Z" level=info msg="CreateContainer within sandbox \"5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"514cb96accb82a63761f17b9f03476a5d8ca5cc628b614a469d60bf9463bdb4d\"" Sep 12 10:14:24.156493 containerd[1910]: time="2025-09-12T10:14:24.155746299Z" level=info msg="StartContainer for \"514cb96accb82a63761f17b9f03476a5d8ca5cc628b614a469d60bf9463bdb4d\"" 
Sep 12 10:14:24.176335 sshd[5191]: Accepted publickey for core from 147.75.109.163 port 50178 ssh2: RSA SHA256:LGvzQL2PrQ7V8r/aQI9Nmd1Kmv0z99Qz/jAyhplIR5E Sep 12 10:14:24.178338 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:14:24.181726 systemd[1]: Started cri-containerd-514cb96accb82a63761f17b9f03476a5d8ca5cc628b614a469d60bf9463bdb4d.scope - libcontainer container 514cb96accb82a63761f17b9f03476a5d8ca5cc628b614a469d60bf9463bdb4d. Sep 12 10:14:24.190528 systemd-logind[1901]: New session 28 of user core. Sep 12 10:14:24.197741 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 10:14:24.220368 containerd[1910]: time="2025-09-12T10:14:24.220263854Z" level=info msg="StartContainer for \"514cb96accb82a63761f17b9f03476a5d8ca5cc628b614a469d60bf9463bdb4d\" returns successfully" Sep 12 10:14:24.306213 systemd[1]: cri-containerd-514cb96accb82a63761f17b9f03476a5d8ca5cc628b614a469d60bf9463bdb4d.scope: Deactivated successfully. Sep 12 10:14:24.306850 systemd[1]: cri-containerd-514cb96accb82a63761f17b9f03476a5d8ca5cc628b614a469d60bf9463bdb4d.scope: Consumed 25ms CPU time, 9.5M memory peak, 3.2M read from disk. 
Sep 12 10:14:24.350539 containerd[1910]: time="2025-09-12T10:14:24.350167741Z" level=info msg="shim disconnected" id=514cb96accb82a63761f17b9f03476a5d8ca5cc628b614a469d60bf9463bdb4d namespace=k8s.io
Sep 12 10:14:24.350828 containerd[1910]: time="2025-09-12T10:14:24.350533294Z" level=warning msg="cleaning up after shim disconnected" id=514cb96accb82a63761f17b9f03476a5d8ca5cc628b614a469d60bf9463bdb4d namespace=k8s.io
Sep 12 10:14:24.350828 containerd[1910]: time="2025-09-12T10:14:24.350819657Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:14:24.892742 kubelet[3205]: E0912 10:14:24.892696 3205 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 10:14:25.106519 containerd[1910]: time="2025-09-12T10:14:25.106196514Z" level=info msg="CreateContainer within sandbox \"5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 10:14:25.125310 containerd[1910]: time="2025-09-12T10:14:25.125261120Z" level=info msg="CreateContainer within sandbox \"5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"977f0158581476aa409a1777ee5c0462abf6f4c4c8fd309cc4d526b4f19be61d\""
Sep 12 10:14:25.126631 containerd[1910]: time="2025-09-12T10:14:25.126595397Z" level=info msg="StartContainer for \"977f0158581476aa409a1777ee5c0462abf6f4c4c8fd309cc4d526b4f19be61d\""
Sep 12 10:14:25.169741 systemd[1]: run-containerd-runc-k8s.io-977f0158581476aa409a1777ee5c0462abf6f4c4c8fd309cc4d526b4f19be61d-runc.mgSyy3.mount: Deactivated successfully.
Sep 12 10:14:25.178654 systemd[1]: Started cri-containerd-977f0158581476aa409a1777ee5c0462abf6f4c4c8fd309cc4d526b4f19be61d.scope - libcontainer container 977f0158581476aa409a1777ee5c0462abf6f4c4c8fd309cc4d526b4f19be61d.
Sep 12 10:14:25.223816 containerd[1910]: time="2025-09-12T10:14:25.223772416Z" level=info msg="StartContainer for \"977f0158581476aa409a1777ee5c0462abf6f4c4c8fd309cc4d526b4f19be61d\" returns successfully"
Sep 12 10:14:25.236639 systemd[1]: cri-containerd-977f0158581476aa409a1777ee5c0462abf6f4c4c8fd309cc4d526b4f19be61d.scope: Deactivated successfully.
Sep 12 10:14:25.237150 systemd[1]: cri-containerd-977f0158581476aa409a1777ee5c0462abf6f4c4c8fd309cc4d526b4f19be61d.scope: Consumed 20ms CPU time, 7.4M memory peak, 2.1M read from disk.
Sep 12 10:14:25.268656 containerd[1910]: time="2025-09-12T10:14:25.268592369Z" level=info msg="shim disconnected" id=977f0158581476aa409a1777ee5c0462abf6f4c4c8fd309cc4d526b4f19be61d namespace=k8s.io
Sep 12 10:14:25.268656 containerd[1910]: time="2025-09-12T10:14:25.268641743Z" level=warning msg="cleaning up after shim disconnected" id=977f0158581476aa409a1777ee5c0462abf6f4c4c8fd309cc4d526b4f19be61d namespace=k8s.io
Sep 12 10:14:25.268656 containerd[1910]: time="2025-09-12T10:14:25.268652179Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:14:25.899791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-977f0158581476aa409a1777ee5c0462abf6f4c4c8fd309cc4d526b4f19be61d-rootfs.mount: Deactivated successfully.
Sep 12 10:14:26.111492 containerd[1910]: time="2025-09-12T10:14:26.111378838Z" level=info msg="CreateContainer within sandbox \"5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 10:14:26.140051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198359748.mount: Deactivated successfully.
Sep 12 10:14:26.154327 containerd[1910]: time="2025-09-12T10:14:26.154202599Z" level=info msg="CreateContainer within sandbox \"5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0aa6705db272f1d9945726e7bb85c751b611926fe3d3fc33b5f64741eb6108ea\""
Sep 12 10:14:26.159722 containerd[1910]: time="2025-09-12T10:14:26.159670406Z" level=info msg="StartContainer for \"0aa6705db272f1d9945726e7bb85c751b611926fe3d3fc33b5f64741eb6108ea\""
Sep 12 10:14:26.206674 systemd[1]: Started cri-containerd-0aa6705db272f1d9945726e7bb85c751b611926fe3d3fc33b5f64741eb6108ea.scope - libcontainer container 0aa6705db272f1d9945726e7bb85c751b611926fe3d3fc33b5f64741eb6108ea.
Sep 12 10:14:26.258206 containerd[1910]: time="2025-09-12T10:14:26.258160176Z" level=info msg="StartContainer for \"0aa6705db272f1d9945726e7bb85c751b611926fe3d3fc33b5f64741eb6108ea\" returns successfully"
Sep 12 10:14:26.390105 systemd[1]: cri-containerd-0aa6705db272f1d9945726e7bb85c751b611926fe3d3fc33b5f64741eb6108ea.scope: Deactivated successfully.
Sep 12 10:14:26.426753 containerd[1910]: time="2025-09-12T10:14:26.426270045Z" level=info msg="shim disconnected" id=0aa6705db272f1d9945726e7bb85c751b611926fe3d3fc33b5f64741eb6108ea namespace=k8s.io
Sep 12 10:14:26.426753 containerd[1910]: time="2025-09-12T10:14:26.426363029Z" level=warning msg="cleaning up after shim disconnected" id=0aa6705db272f1d9945726e7bb85c751b611926fe3d3fc33b5f64741eb6108ea namespace=k8s.io
Sep 12 10:14:26.426753 containerd[1910]: time="2025-09-12T10:14:26.426373978Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:14:26.859962 kubelet[3205]: I0912 10:14:26.859901 3205 setters.go:618] "Node became not ready" node="ip-172-31-20-240" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T10:14:26Z","lastTransitionTime":"2025-09-12T10:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 10:14:26.899845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0aa6705db272f1d9945726e7bb85c751b611926fe3d3fc33b5f64741eb6108ea-rootfs.mount: Deactivated successfully.
Sep 12 10:14:27.111559 containerd[1910]: time="2025-09-12T10:14:27.111431635Z" level=info msg="CreateContainer within sandbox \"5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 10:14:27.129270 containerd[1910]: time="2025-09-12T10:14:27.129221735Z" level=info msg="CreateContainer within sandbox \"5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b0c7b08b99238e6a5a5aa2a99e74bce939e964e64ed4778f5b3f92acdcf7f512\""
Sep 12 10:14:27.130945 containerd[1910]: time="2025-09-12T10:14:27.130908359Z" level=info msg="StartContainer for \"b0c7b08b99238e6a5a5aa2a99e74bce939e964e64ed4778f5b3f92acdcf7f512\""
Sep 12 10:14:27.172673 systemd[1]: Started cri-containerd-b0c7b08b99238e6a5a5aa2a99e74bce939e964e64ed4778f5b3f92acdcf7f512.scope - libcontainer container b0c7b08b99238e6a5a5aa2a99e74bce939e964e64ed4778f5b3f92acdcf7f512.
Sep 12 10:14:27.202689 systemd[1]: cri-containerd-b0c7b08b99238e6a5a5aa2a99e74bce939e964e64ed4778f5b3f92acdcf7f512.scope: Deactivated successfully.
Sep 12 10:14:27.204746 containerd[1910]: time="2025-09-12T10:14:27.204704545Z" level=info msg="StartContainer for \"b0c7b08b99238e6a5a5aa2a99e74bce939e964e64ed4778f5b3f92acdcf7f512\" returns successfully"
Sep 12 10:14:27.238314 containerd[1910]: time="2025-09-12T10:14:27.238257146Z" level=info msg="shim disconnected" id=b0c7b08b99238e6a5a5aa2a99e74bce939e964e64ed4778f5b3f92acdcf7f512 namespace=k8s.io
Sep 12 10:14:27.238314 containerd[1910]: time="2025-09-12T10:14:27.238306493Z" level=warning msg="cleaning up after shim disconnected" id=b0c7b08b99238e6a5a5aa2a99e74bce939e964e64ed4778f5b3f92acdcf7f512 namespace=k8s.io
Sep 12 10:14:27.238314 containerd[1910]: time="2025-09-12T10:14:27.238315532Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:14:27.899917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0c7b08b99238e6a5a5aa2a99e74bce939e964e64ed4778f5b3f92acdcf7f512-rootfs.mount: Deactivated successfully.
Sep 12 10:14:28.118309 containerd[1910]: time="2025-09-12T10:14:28.118259375Z" level=info msg="CreateContainer within sandbox \"5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 10:14:28.139838 containerd[1910]: time="2025-09-12T10:14:28.139363054Z" level=info msg="CreateContainer within sandbox \"5c0d1df6f24f0f737dfdf0dd3d752a7f08018aeaa18a4f140e3ba854f49c3e39\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5ba1b3d5a17b0729e605849e5de3cfd1799294ebd097ebed507838536f10d15a\""
Sep 12 10:14:28.142499 containerd[1910]: time="2025-09-12T10:14:28.142228203Z" level=info msg="StartContainer for \"5ba1b3d5a17b0729e605849e5de3cfd1799294ebd097ebed507838536f10d15a\""
Sep 12 10:14:28.206504 systemd[1]: Started cri-containerd-5ba1b3d5a17b0729e605849e5de3cfd1799294ebd097ebed507838536f10d15a.scope - libcontainer container 5ba1b3d5a17b0729e605849e5de3cfd1799294ebd097ebed507838536f10d15a.
Sep 12 10:14:28.298490 containerd[1910]: time="2025-09-12T10:14:28.297113231Z" level=info msg="StartContainer for \"5ba1b3d5a17b0729e605849e5de3cfd1799294ebd097ebed507838536f10d15a\" returns successfully"
Sep 12 10:14:28.900039 systemd[1]: run-containerd-runc-k8s.io-5ba1b3d5a17b0729e605849e5de3cfd1799294ebd097ebed507838536f10d15a-runc.OjldtE.mount: Deactivated successfully.
Sep 12 10:14:29.116515 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 12 10:14:29.135338 kubelet[3205]: I0912 10:14:29.135288 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ss2gg" podStartSLOduration=6.135273453 podStartE2EDuration="6.135273453s" podCreationTimestamp="2025-09-12 10:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:14:29.13497796 +0000 UTC m=+104.571644975" watchObservedRunningTime="2025-09-12 10:14:29.135273453 +0000 UTC m=+104.571940468"
Sep 12 10:14:32.176863 (udev-worker)[6031]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 10:14:32.182826 systemd-networkd[1833]: lxc_health: Link UP
Sep 12 10:14:32.190903 systemd-networkd[1833]: lxc_health: Gained carrier
Sep 12 10:14:32.191515 (udev-worker)[6033]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 10:14:33.441710 systemd-networkd[1833]: lxc_health: Gained IPv6LL
Sep 12 10:14:35.616973 systemd[1]: run-containerd-runc-k8s.io-5ba1b3d5a17b0729e605849e5de3cfd1799294ebd097ebed507838536f10d15a-runc.FZZRup.mount: Deactivated successfully.
Sep 12 10:14:35.684687 kubelet[3205]: E0912 10:14:35.683962 3205 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58366->127.0.0.1:45355: write tcp 127.0.0.1:58366->127.0.0.1:45355: write: broken pipe
Sep 12 10:14:36.385918 ntpd[1886]: Listen normally on 15 lxc_health [fe80::a875:92ff:fec2:9d1f%14]:123
Sep 12 10:14:36.386476 ntpd[1886]: 12 Sep 10:14:36 ntpd[1886]: Listen normally on 15 lxc_health [fe80::a875:92ff:fec2:9d1f%14]:123
Sep 12 10:14:37.794247 systemd[1]: run-containerd-runc-k8s.io-5ba1b3d5a17b0729e605849e5de3cfd1799294ebd097ebed507838536f10d15a-runc.mimjqu.mount: Deactivated successfully.
Sep 12 10:14:37.854210 kubelet[3205]: E0912 10:14:37.854173 3205 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51228->127.0.0.1:45355: write tcp 127.0.0.1:51228->127.0.0.1:45355: write: broken pipe
Sep 12 10:14:37.877865 sshd[5258]: Connection closed by 147.75.109.163 port 50178
Sep 12 10:14:37.879699 sshd-session[5191]: pam_unix(sshd:session): session closed for user core
Sep 12 10:14:37.886181 systemd[1]: sshd@27-172.31.20.240:22-147.75.109.163:50178.service: Deactivated successfully.
Sep 12 10:14:37.888310 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 10:14:37.890381 systemd-logind[1901]: Session 28 logged out. Waiting for processes to exit.
Sep 12 10:14:37.891684 systemd-logind[1901]: Removed session 28.
Sep 12 10:14:44.757064 containerd[1910]: time="2025-09-12T10:14:44.757023264Z" level=info msg="StopPodSandbox for \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\""
Sep 12 10:14:44.757880 containerd[1910]: time="2025-09-12T10:14:44.757734341Z" level=info msg="TearDown network for sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" successfully"
Sep 12 10:14:44.757880 containerd[1910]: time="2025-09-12T10:14:44.757763549Z" level=info msg="StopPodSandbox for \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" returns successfully"
Sep 12 10:14:44.758242 containerd[1910]: time="2025-09-12T10:14:44.758210607Z" level=info msg="RemovePodSandbox for \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\""
Sep 12 10:14:44.758242 containerd[1910]: time="2025-09-12T10:14:44.758239377Z" level=info msg="Forcibly stopping sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\""
Sep 12 10:14:44.758386 containerd[1910]: time="2025-09-12T10:14:44.758304268Z" level=info msg="TearDown network for sandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" successfully"
Sep 12 10:14:44.766561 containerd[1910]: time="2025-09-12T10:14:44.766508594Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 10:14:44.766710 containerd[1910]: time="2025-09-12T10:14:44.766589135Z" level=info msg="RemovePodSandbox \"551193c8798ac6b0b2eb693a310f585b98075d9a683b41ca56f106e90e5392e9\" returns successfully"
Sep 12 10:14:44.784838 containerd[1910]: time="2025-09-12T10:14:44.784801264Z" level=info msg="StopPodSandbox for \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\""
Sep 12 10:14:44.784969 containerd[1910]: time="2025-09-12T10:14:44.784893013Z" level=info msg="TearDown network for sandbox \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\" successfully"
Sep 12 10:14:44.784969 containerd[1910]: time="2025-09-12T10:14:44.784914770Z" level=info msg="StopPodSandbox for \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\" returns successfully"
Sep 12 10:14:44.785284 containerd[1910]: time="2025-09-12T10:14:44.785257031Z" level=info msg="RemovePodSandbox for \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\""
Sep 12 10:14:44.785284 containerd[1910]: time="2025-09-12T10:14:44.785283176Z" level=info msg="Forcibly stopping sandbox \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\""
Sep 12 10:14:44.785496 containerd[1910]: time="2025-09-12T10:14:44.785341202Z" level=info msg="TearDown network for sandbox \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\" successfully"
Sep 12 10:14:44.789269 containerd[1910]: time="2025-09-12T10:14:44.789145263Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 10:14:44.789269 containerd[1910]: time="2025-09-12T10:14:44.789197752Z" level=info msg="RemovePodSandbox \"e51f9d7615979cec9b3b0e56a90fc3a84e1f9679d1949f0ba45c99f975c168f7\" returns successfully"
Sep 12 10:14:57.102723 systemd[1]: cri-containerd-900a1d5e40bcfca558d0c4224c69a0905d6c13aac9cff061552f41fa1b1c18b3.scope: Deactivated successfully.
Sep 12 10:14:57.103014 systemd[1]: cri-containerd-900a1d5e40bcfca558d0c4224c69a0905d6c13aac9cff061552f41fa1b1c18b3.scope: Consumed 2.852s CPU time, 91.6M memory peak, 48.1M read from disk.
Sep 12 10:14:57.131888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-900a1d5e40bcfca558d0c4224c69a0905d6c13aac9cff061552f41fa1b1c18b3-rootfs.mount: Deactivated successfully.
Sep 12 10:14:57.151488 containerd[1910]: time="2025-09-12T10:14:57.151396263Z" level=info msg="shim disconnected" id=900a1d5e40bcfca558d0c4224c69a0905d6c13aac9cff061552f41fa1b1c18b3 namespace=k8s.io
Sep 12 10:14:57.151488 containerd[1910]: time="2025-09-12T10:14:57.151448188Z" level=warning msg="cleaning up after shim disconnected" id=900a1d5e40bcfca558d0c4224c69a0905d6c13aac9cff061552f41fa1b1c18b3 namespace=k8s.io
Sep 12 10:14:57.151488 containerd[1910]: time="2025-09-12T10:14:57.151488587Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:14:57.471413 kubelet[3205]: E0912 10:14:57.471260 3205 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-240?timeout=10s\": context deadline exceeded"
Sep 12 10:14:58.181859 kubelet[3205]: I0912 10:14:58.181828 3205 scope.go:117] "RemoveContainer" containerID="900a1d5e40bcfca558d0c4224c69a0905d6c13aac9cff061552f41fa1b1c18b3"
Sep 12 10:14:58.184246 containerd[1910]: time="2025-09-12T10:14:58.184209745Z" level=info msg="CreateContainer within sandbox \"4bd392f24f9d357af3637301c5b55f81a8f6b39cb34ff4d851bd7dc7f7ac9384\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 10:14:58.201967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2367076124.mount: Deactivated successfully.
Sep 12 10:14:58.205994 containerd[1910]: time="2025-09-12T10:14:58.205950371Z" level=info msg="CreateContainer within sandbox \"4bd392f24f9d357af3637301c5b55f81a8f6b39cb34ff4d851bd7dc7f7ac9384\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"246240b0dd3607babe7fc259bd1834d247f3dbc7361bdfe5e7ff369519379b3b\""
Sep 12 10:14:58.206575 containerd[1910]: time="2025-09-12T10:14:58.206549390Z" level=info msg="StartContainer for \"246240b0dd3607babe7fc259bd1834d247f3dbc7361bdfe5e7ff369519379b3b\""
Sep 12 10:14:58.244684 systemd[1]: Started cri-containerd-246240b0dd3607babe7fc259bd1834d247f3dbc7361bdfe5e7ff369519379b3b.scope - libcontainer container 246240b0dd3607babe7fc259bd1834d247f3dbc7361bdfe5e7ff369519379b3b.
Sep 12 10:14:58.294966 containerd[1910]: time="2025-09-12T10:14:58.294919725Z" level=info msg="StartContainer for \"246240b0dd3607babe7fc259bd1834d247f3dbc7361bdfe5e7ff369519379b3b\" returns successfully"
Sep 12 10:15:01.057066 systemd[1]: cri-containerd-c9a6584e655d41eb7807f549df77b08f5625498365262d43cbf2671797d24a0d.scope: Deactivated successfully.
Sep 12 10:15:01.058076 systemd[1]: cri-containerd-c9a6584e655d41eb7807f549df77b08f5625498365262d43cbf2671797d24a0d.scope: Consumed 3.126s CPU time, 28.1M memory peak, 13M read from disk.
Sep 12 10:15:01.088714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9a6584e655d41eb7807f549df77b08f5625498365262d43cbf2671797d24a0d-rootfs.mount: Deactivated successfully.
Sep 12 10:15:01.134971 containerd[1910]: time="2025-09-12T10:15:01.133950904Z" level=info msg="shim disconnected" id=c9a6584e655d41eb7807f549df77b08f5625498365262d43cbf2671797d24a0d namespace=k8s.io
Sep 12 10:15:01.138108 containerd[1910]: time="2025-09-12T10:15:01.134998290Z" level=warning msg="cleaning up after shim disconnected" id=c9a6584e655d41eb7807f549df77b08f5625498365262d43cbf2671797d24a0d namespace=k8s.io
Sep 12 10:15:01.138108 containerd[1910]: time="2025-09-12T10:15:01.135043812Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:15:01.228949 containerd[1910]: time="2025-09-12T10:15:01.228897870Z" level=warning msg="cleanup warnings time=\"2025-09-12T10:15:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 10:15:02.222225 kubelet[3205]: I0912 10:15:02.222183 3205 scope.go:117] "RemoveContainer" containerID="c9a6584e655d41eb7807f549df77b08f5625498365262d43cbf2671797d24a0d"
Sep 12 10:15:02.234145 containerd[1910]: time="2025-09-12T10:15:02.234092582Z" level=info msg="CreateContainer within sandbox \"524ea9d8009e7d95668fa42efff141e48751cff5118a86c33837c5eeb81e3a6b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 10:15:02.363757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3411137876.mount: Deactivated successfully.
Sep 12 10:15:02.378370 containerd[1910]: time="2025-09-12T10:15:02.378181373Z" level=info msg="CreateContainer within sandbox \"524ea9d8009e7d95668fa42efff141e48751cff5118a86c33837c5eeb81e3a6b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c25606dee742b4bb63a676cc8723082989c74f516aa1f2619127b225fe9a945f\""
Sep 12 10:15:02.382100 containerd[1910]: time="2025-09-12T10:15:02.381995149Z" level=info msg="StartContainer for \"c25606dee742b4bb63a676cc8723082989c74f516aa1f2619127b225fe9a945f\""
Sep 12 10:15:02.574665 systemd[1]: run-containerd-runc-k8s.io-c25606dee742b4bb63a676cc8723082989c74f516aa1f2619127b225fe9a945f-runc.xaUaDe.mount: Deactivated successfully.
Sep 12 10:15:02.618980 systemd[1]: Started cri-containerd-c25606dee742b4bb63a676cc8723082989c74f516aa1f2619127b225fe9a945f.scope - libcontainer container c25606dee742b4bb63a676cc8723082989c74f516aa1f2619127b225fe9a945f.
Sep 12 10:15:02.870070 containerd[1910]: time="2025-09-12T10:15:02.870009659Z" level=info msg="StartContainer for \"c25606dee742b4bb63a676cc8723082989c74f516aa1f2619127b225fe9a945f\" returns successfully"
Sep 12 10:15:07.474541 kubelet[3205]: E0912 10:15:07.474419 3205 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-240?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"