Sep 12 17:49:22.909428 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:34:39 -00 2025
Sep 12 17:49:22.909466 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:49:22.909482 kernel: BIOS-provided physical RAM map:
Sep 12 17:49:22.909494 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:49:22.909503 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 12 17:49:22.909514 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 12 17:49:22.909528 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 17:49:22.909541 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 17:49:22.909557 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 17:49:22.909570 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 17:49:22.909582 kernel: NX (Execute Disable) protection: active
Sep 12 17:49:22.909594 kernel: APIC: Static calls initialized
Sep 12 17:49:22.909605 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Sep 12 17:49:22.909617 kernel: extended physical RAM map:
Sep 12 17:49:22.909633 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:49:22.909646 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Sep 12 17:49:22.909659 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Sep 12 17:49:22.909670 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Sep 12 17:49:22.909684 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 12 17:49:22.909695 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 17:49:22.909708 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 17:49:22.909720 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 17:49:22.909731 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 17:49:22.909743 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:49:22.909758 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 12 17:49:22.909771 kernel: secureboot: Secure boot disabled
Sep 12 17:49:22.909783 kernel: SMBIOS 2.7 present.
Sep 12 17:49:22.909796 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 12 17:49:22.909809 kernel: DMI: Memory slots populated: 1/1
Sep 12 17:49:22.909823 kernel: Hypervisor detected: KVM
Sep 12 17:49:22.909837 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:49:22.909852 kernel: kvm-clock: using sched offset of 5097151233 cycles
Sep 12 17:49:22.909867 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:49:22.909881 kernel: tsc: Detected 2499.998 MHz processor
Sep 12 17:49:22.909896 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:49:22.909915 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:49:22.909928 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 12 17:49:22.909944 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:49:22.909960 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:49:22.909975 kernel: Using GB pages for direct mapping
Sep 12 17:49:22.909995 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:49:22.910013 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 12 17:49:22.910028 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 17:49:22.912094 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 17:49:22.912111 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 12 17:49:22.912125 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 12 17:49:22.912139 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 12 17:49:22.912152 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 17:49:22.912165 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 17:49:22.912184 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 12 17:49:22.912197 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 12 17:49:22.912210 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 17:49:22.912223 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 17:49:22.912236 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 12 17:49:22.912249 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 12 17:49:22.912263 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 12 17:49:22.912276 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 12 17:49:22.912291 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 12 17:49:22.912304 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 12 17:49:22.912317 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 12 17:49:22.912330 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 12 17:49:22.912343 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 12 17:49:22.912356 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 12 17:49:22.912369 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 12 17:49:22.912383 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 12 17:49:22.912396 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 12 17:49:22.912409 kernel: NUMA: Initialized distance table, cnt=1
Sep 12 17:49:22.912425 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Sep 12 17:49:22.912438 kernel: Zone ranges:
Sep 12 17:49:22.912451 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:49:22.912464 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 12 17:49:22.912477 kernel: Normal empty
Sep 12 17:49:22.912490 kernel: Device empty
Sep 12 17:49:22.912502 kernel: Movable zone start for each node
Sep 12 17:49:22.912515 kernel: Early memory node ranges
Sep 12 17:49:22.912528 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 17:49:22.912544 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 12 17:49:22.912557 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 12 17:49:22.912570 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 12 17:49:22.912583 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:49:22.912596 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 17:49:22.912609 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 12 17:49:22.912623 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 12 17:49:22.912636 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 12 17:49:22.912649 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:49:22.912663 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 12 17:49:22.912678 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:49:22.912691 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:49:22.912705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:49:22.912718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:49:22.912731 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:49:22.912743 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:49:22.912756 kernel: TSC deadline timer available
Sep 12 17:49:22.912770 kernel: CPU topo: Max. logical packages: 1
Sep 12 17:49:22.912783 kernel: CPU topo: Max. logical dies: 1
Sep 12 17:49:22.912798 kernel: CPU topo: Max. dies per package: 1
Sep 12 17:49:22.912811 kernel: CPU topo: Max. threads per core: 2
Sep 12 17:49:22.912822 kernel: CPU topo: Num. cores per package: 1
Sep 12 17:49:22.912834 kernel: CPU topo: Num. threads per package: 2
Sep 12 17:49:22.912845 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 12 17:49:22.912858 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:49:22.912872 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 12 17:49:22.912886 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:49:22.912901 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:49:22.912918 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:49:22.912933 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 12 17:49:22.912947 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 12 17:49:22.912961 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:49:22.912976 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:49:22.912991 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:49:22.913009 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:49:22.913024 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:49:22.913057 kernel: random: crng init done
Sep 12 17:49:22.913071 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:49:22.913087 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:49:22.913102 kernel: Fallback order for Node 0: 0
Sep 12 17:49:22.913116 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Sep 12 17:49:22.913131 kernel: Policy zone: DMA32
Sep 12 17:49:22.913157 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:49:22.913176 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:49:22.913192 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:49:22.913207 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 17:49:22.913223 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 17:49:22.913239 kernel: Dynamic Preempt: voluntary
Sep 12 17:49:22.913257 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:49:22.913274 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:49:22.913289 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:49:22.913306 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:49:22.913322 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:49:22.913341 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:49:22.913357 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:49:22.913372 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:49:22.913388 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:49:22.913404 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:49:22.913420 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:49:22.913436 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:49:22.913451 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:49:22.913470 kernel: Console: colour dummy device 80x25
Sep 12 17:49:22.913485 kernel: printk: legacy console [tty0] enabled
Sep 12 17:49:22.913501 kernel: printk: legacy console [ttyS0] enabled
Sep 12 17:49:22.913516 kernel: ACPI: Core revision 20240827
Sep 12 17:49:22.913532 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 12 17:49:22.913549 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:49:22.913564 kernel: x2apic enabled
Sep 12 17:49:22.913580 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:49:22.913596 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 12 17:49:22.913612 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 12 17:49:22.913631 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 17:49:22.913647 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 17:49:22.913663 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:49:22.913678 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:49:22.913694 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:49:22.913709 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 12 17:49:22.913725 kernel: RETBleed: Vulnerable
Sep 12 17:49:22.913741 kernel: Speculative Store Bypass: Vulnerable
Sep 12 17:49:22.913757 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:49:22.913772 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:49:22.913790 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 12 17:49:22.913805 kernel: active return thunk: its_return_thunk
Sep 12 17:49:22.913821 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:49:22.913836 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:49:22.913852 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:49:22.913868 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:49:22.913883 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 12 17:49:22.913898 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 12 17:49:22.913914 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 12 17:49:22.913929 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 12 17:49:22.913944 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 12 17:49:22.913962 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 12 17:49:22.913978 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:49:22.913992 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 12 17:49:22.914007 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 12 17:49:22.914022 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 12 17:49:22.915055 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 12 17:49:22.915079 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 12 17:49:22.915095 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 12 17:49:22.915110 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 12 17:49:22.915127 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:49:22.915142 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:49:22.915163 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 17:49:22.915178 kernel: landlock: Up and running.
Sep 12 17:49:22.915194 kernel: SELinux: Initializing.
Sep 12 17:49:22.915208 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:49:22.915223 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:49:22.915242 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 12 17:49:22.915263 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 12 17:49:22.915281 kernel: signal: max sigframe size: 3632
Sep 12 17:49:22.915295 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:49:22.915310 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:49:22.915325 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 17:49:22.915340 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:49:22.915353 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:49:22.915367 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:49:22.915382 kernel: .... node #0, CPUs: #1
Sep 12 17:49:22.915398 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 12 17:49:22.915414 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 17:49:22.915430 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:49:22.915445 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 12 17:49:22.915465 kernel: Memory: 1908056K/2037804K available (14336K kernel code, 2432K rwdata, 9960K rodata, 54040K init, 2924K bss, 125192K reserved, 0K cma-reserved)
Sep 12 17:49:22.915480 kernel: devtmpfs: initialized
Sep 12 17:49:22.915495 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:49:22.915510 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 12 17:49:22.915526 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:49:22.915541 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:49:22.915556 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:49:22.915572 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:49:22.915587 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:49:22.915607 kernel: audit: type=2000 audit(1757699361.155:1): state=initialized audit_enabled=0 res=1
Sep 12 17:49:22.915622 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:49:22.915637 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:49:22.915651 kernel: cpuidle: using governor menu
Sep 12 17:49:22.915666 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:49:22.915681 kernel: dca service started, version 1.12.1
Sep 12 17:49:22.915696 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:49:22.915712 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:49:22.915727 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:49:22.915746 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:49:22.915761 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:49:22.915778 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:49:22.915793 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:49:22.915808 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:49:22.915823 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:49:22.915838 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 12 17:49:22.915853 kernel: ACPI: Interpreter enabled
Sep 12 17:49:22.915869 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:49:22.915887 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:49:22.915902 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:49:22.915918 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:49:22.915933 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 17:49:22.915947 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:49:22.916198 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:49:22.916340 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:49:22.916482 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:49:22.916502 kernel: acpiphp: Slot [3] registered
Sep 12 17:49:22.916518 kernel: acpiphp: Slot [4] registered
Sep 12 17:49:22.916533 kernel: acpiphp: Slot [5] registered
Sep 12 17:49:22.916549 kernel: acpiphp: Slot [6] registered
Sep 12 17:49:22.916565 kernel: acpiphp: Slot [7] registered
Sep 12 17:49:22.916580 kernel: acpiphp: Slot [8] registered
Sep 12 17:49:22.916596 kernel: acpiphp: Slot [9] registered
Sep 12 17:49:22.916611 kernel: acpiphp: Slot [10] registered
Sep 12 17:49:22.916630 kernel: acpiphp: Slot [11] registered
Sep 12 17:49:22.916646 kernel: acpiphp: Slot [12] registered
Sep 12 17:49:22.916661 kernel: acpiphp: Slot [13] registered
Sep 12 17:49:22.916677 kernel: acpiphp: Slot [14] registered
Sep 12 17:49:22.916693 kernel: acpiphp: Slot [15] registered
Sep 12 17:49:22.916708 kernel: acpiphp: Slot [16] registered
Sep 12 17:49:22.916723 kernel: acpiphp: Slot [17] registered
Sep 12 17:49:22.916740 kernel: acpiphp: Slot [18] registered
Sep 12 17:49:22.916755 kernel: acpiphp: Slot [19] registered
Sep 12 17:49:22.916771 kernel: acpiphp: Slot [20] registered
Sep 12 17:49:22.916790 kernel: acpiphp: Slot [21] registered
Sep 12 17:49:22.916806 kernel: acpiphp: Slot [22] registered
Sep 12 17:49:22.916822 kernel: acpiphp: Slot [23] registered
Sep 12 17:49:22.916837 kernel: acpiphp: Slot [24] registered
Sep 12 17:49:22.916853 kernel: acpiphp: Slot [25] registered
Sep 12 17:49:22.916868 kernel: acpiphp: Slot [26] registered
Sep 12 17:49:22.916884 kernel: acpiphp: Slot [27] registered
Sep 12 17:49:22.916901 kernel: acpiphp: Slot [28] registered
Sep 12 17:49:22.916916 kernel: acpiphp: Slot [29] registered
Sep 12 17:49:22.916935 kernel: acpiphp: Slot [30] registered
Sep 12 17:49:22.916950 kernel: acpiphp: Slot [31] registered
Sep 12 17:49:22.916966 kernel: PCI host bridge to bus 0000:00
Sep 12 17:49:22.918191 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:49:22.918332 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:49:22.918457 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:49:22.918578 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 17:49:22.918699 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 12 17:49:22.918825 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:49:22.919096 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 12 17:49:22.919284 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 12 17:49:22.919442 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Sep 12 17:49:22.919589 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 12 17:49:22.919737 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 12 17:49:22.919888 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 12 17:49:22.920702 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 12 17:49:22.923009 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 12 17:49:22.923172 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 12 17:49:22.923303 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 12 17:49:22.923439 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 17:49:22.923568 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Sep 12 17:49:22.923702 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 12 17:49:22.923829 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:49:22.923962 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Sep 12 17:49:22.924103 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Sep 12 17:49:22.924241 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Sep 12 17:49:22.924373 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Sep 12 17:49:22.924399 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:49:22.924416 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:49:22.924432 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:49:22.924448 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:49:22.924463 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:49:22.924480 kernel: iommu: Default domain type: Translated
Sep 12 17:49:22.924496 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:49:22.924512 kernel: efivars: Registered efivars operations
Sep 12 17:49:22.924529 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:49:22.924548 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:49:22.924565 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Sep 12 17:49:22.924581 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 12 17:49:22.924596 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 12 17:49:22.924733 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 12 17:49:22.924871 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 12 17:49:22.925008 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:49:22.925028 kernel: vgaarb: loaded
Sep 12 17:49:22.926098 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 12 17:49:22.926118 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 12 17:49:22.926134 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:49:22.926151 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:49:22.926167 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:49:22.926183 kernel: pnp: PnP ACPI init
Sep 12 17:49:22.926199 kernel: pnp: PnP ACPI: found 5 devices
Sep 12 17:49:22.926216 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:49:22.926233 kernel: NET: Registered PF_INET protocol family
Sep 12 17:49:22.926252 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:49:22.926268 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 17:49:22.926284 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:49:22.926300 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:49:22.926316 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:49:22.926333 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 17:49:22.926350 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:49:22.926367 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:49:22.926383 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:49:22.926401 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:49:22.926560 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:49:22.926716 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:49:22.926846 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:49:22.927012 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 17:49:22.927162 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 12 17:49:22.927310 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:49:22.927331 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:49:22.927353 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:49:22.927369 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 12 17:49:22.927385 kernel: clocksource: Switched to clocksource tsc
Sep 12 17:49:22.927401 kernel: Initialise system trusted keyrings
Sep 12 17:49:22.927417 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 17:49:22.927433 kernel: Key type asymmetric registered
Sep 12 17:49:22.927448 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:49:22.927464 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:49:22.927480 kernel: io scheduler mq-deadline registered
Sep 12 17:49:22.927498 kernel: io scheduler kyber registered
Sep 12 17:49:22.927514 kernel: io scheduler bfq registered
Sep 12 17:49:22.927529 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:49:22.927545 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:49:22.927562 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:49:22.927577 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:49:22.927593 kernel: i8042: Warning: Keylock active
Sep 12 17:49:22.927609 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:49:22.927624 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:49:22.927770 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 12 17:49:22.927900 kernel: rtc_cmos 00:00: registered as rtc0
Sep 12 17:49:22.928026 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T17:49:22 UTC (1757699362)
Sep 12 17:49:22.930220 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 12 17:49:22.930271 kernel: intel_pstate: CPU model not supported
Sep 12 17:49:22.930291 kernel: efifb: probing for efifb
Sep 12 17:49:22.930308 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Sep 12 17:49:22.930325 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 12 17:49:22.930345 kernel: efifb: scrolling: redraw
Sep 12 17:49:22.930362 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:49:22.930379 kernel: Console: switching to colour frame buffer device 100x37
Sep 12 17:49:22.930395 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:49:22.930413 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:49:22.930429 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:49:22.930446 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:49:22.930462 kernel: Segment Routing with IPv6
Sep 12 17:49:22.930479 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:49:22.930499 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:49:22.930516 kernel: Key type dns_resolver registered
Sep 12 17:49:22.930533 kernel: IPI shorthand broadcast: enabled
Sep 12 17:49:22.930550 kernel: sched_clock: Marking stable (2715002164, 179349524)->(2999866467, -105514779)
Sep 12 17:49:22.930567 kernel: registered taskstats version 1
Sep 12 17:49:22.930583 kernel: Loading compiled-in X.509 certificates
Sep 12 17:49:22.930600 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: f1ae8d6e9bfae84d90f4136cf098b0465b2a5bd7'
Sep 12 17:49:22.930616 kernel: Demotion targets for Node 0: null
Sep 12 17:49:22.930633 kernel: Key type .fscrypt registered
Sep 12 17:49:22.930652 kernel: Key type fscrypt-provisioning registered
Sep 12 17:49:22.930669 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:49:22.930685 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:49:22.930702 kernel: ima: No architecture policies found
Sep 12 17:49:22.930719 kernel: clk: Disabling unused clocks
Sep 12 17:49:22.930735 kernel: Warning: unable to open an initial console.
Sep 12 17:49:22.930752 kernel: Freeing unused kernel image (initmem) memory: 54040K
Sep 12 17:49:22.930768 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 17:49:22.930786 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 12 17:49:22.930805 kernel: Run /init as init process
Sep 12 17:49:22.930822 kernel: with arguments:
Sep 12 17:49:22.930838 kernel: /init
Sep 12 17:49:22.930855 kernel: with environment:
Sep 12 17:49:22.930871 kernel: HOME=/
Sep 12 17:49:22.930887 kernel: TERM=linux
Sep 12 17:49:22.930906 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:49:22.930932 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:49:22.930951 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:49:22.930967 systemd[1]: Detected virtualization amazon. Sep 12 17:49:22.930984 systemd[1]: Detected architecture x86-64. Sep 12 17:49:22.930999 systemd[1]: Running in initrd. Sep 12 17:49:22.931015 systemd[1]: No hostname configured, using default hostname. Sep 12 17:49:22.932060 systemd[1]: Hostname set to . Sep 12 17:49:22.932088 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:49:22.932107 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:49:22.932125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:49:22.932144 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:49:22.932162 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:49:22.932180 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:49:22.932202 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:49:22.932221 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:49:22.932238 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:49:22.932256 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:49:22.932272 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 12 17:49:22.932287 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:49:22.932302 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:49:22.932321 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:49:22.932336 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:49:22.932351 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:49:22.932367 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:49:22.932384 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:49:22.932401 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:49:22.932419 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 17:49:22.932438 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:49:22.932457 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:49:22.932478 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:49:22.932496 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:49:22.932514 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:49:22.932530 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:49:22.932545 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:49:22.932561 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 17:49:22.932577 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:49:22.932592 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:49:22.932610 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 12 17:49:22.932626 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:49:22.932642 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:49:22.932659 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:49:22.932711 systemd-journald[207]: Collecting audit messages is disabled. Sep 12 17:49:22.932750 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:49:22.932766 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:49:22.932784 systemd-journald[207]: Journal started Sep 12 17:49:22.932820 systemd-journald[207]: Runtime Journal (/run/log/journal/ec2561e7fbdd10799587426ce9d4c138) is 4.8M, max 38.4M, 33.6M free. Sep 12 17:49:22.938778 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:49:22.942205 systemd-modules-load[208]: Inserted module 'overlay' Sep 12 17:49:22.947647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:49:22.953019 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:49:22.957832 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:49:22.966381 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:49:22.974431 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:49:22.992285 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:49:23.000075 kernel: Bridge firewalling registered Sep 12 17:49:22.999299 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Sep 12 17:49:23.002228 systemd-modules-load[208]: Inserted module 'br_netfilter' Sep 12 17:49:23.009070 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 17:49:23.008553 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:49:23.011980 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:49:23.013880 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:49:23.014840 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:49:23.018020 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:49:23.022191 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:49:23.041954 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:49:23.045869 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:49:23.053240 dracut-cmdline[241]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858 Sep 12 17:49:23.104414 systemd-resolved[253]: Positive Trust Anchors: Sep 12 17:49:23.105465 systemd-resolved[253]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:49:23.105533 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:49:23.114591 systemd-resolved[253]: Defaulting to hostname 'linux'. Sep 12 17:49:23.115986 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:49:23.117590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:49:23.156072 kernel: SCSI subsystem initialized Sep 12 17:49:23.167070 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:49:23.178061 kernel: iscsi: registered transport (tcp) Sep 12 17:49:23.200094 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:49:23.200171 kernel: QLogic iSCSI HBA Driver Sep 12 17:49:23.220272 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:49:23.241501 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:49:23.245105 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:49:23.289466 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:49:23.291857 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Sep 12 17:49:23.346070 kernel: raid6: avx512x4 gen() 18026 MB/s Sep 12 17:49:23.364061 kernel: raid6: avx512x2 gen() 18099 MB/s Sep 12 17:49:23.382064 kernel: raid6: avx512x1 gen() 17937 MB/s Sep 12 17:49:23.400062 kernel: raid6: avx2x4 gen() 17912 MB/s Sep 12 17:49:23.418061 kernel: raid6: avx2x2 gen() 17827 MB/s Sep 12 17:49:23.436498 kernel: raid6: avx2x1 gen() 13236 MB/s Sep 12 17:49:23.436581 kernel: raid6: using algorithm avx512x2 gen() 18099 MB/s Sep 12 17:49:23.455350 kernel: raid6: .... xor() 23959 MB/s, rmw enabled Sep 12 17:49:23.455436 kernel: raid6: using avx512x2 recovery algorithm Sep 12 17:49:23.477073 kernel: xor: automatically using best checksumming function avx Sep 12 17:49:23.648076 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:49:23.654619 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:49:23.656857 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:49:23.686489 systemd-udevd[455]: Using default interface naming scheme 'v255'. Sep 12 17:49:23.693309 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:49:23.697543 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:49:23.726008 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Sep 12 17:49:23.753509 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:49:23.755761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:49:23.815163 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:49:23.819017 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 12 17:49:23.908062 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:49:23.933317 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 12 17:49:23.933594 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 12 17:49:23.938057 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 12 17:49:23.949077 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 12 17:49:23.952096 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 12 17:49:23.958475 kernel: AES CTR mode by8 optimization enabled Sep 12 17:49:23.968058 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 12 17:49:23.968317 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Sep 12 17:49:23.967942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:49:23.970985 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:49:23.972883 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:49:23.984270 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:49:23.992104 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:b3:77:1b:17:2b Sep 12 17:49:23.992972 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:49:24.004571 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:49:24.004604 kernel: GPT:9289727 != 16777215 Sep 12 17:49:24.004625 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:49:24.004645 kernel: GPT:9289727 != 16777215 Sep 12 17:49:24.004663 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:49:24.004682 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:49:24.006567 (udev-worker)[508]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:49:24.030263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:49:24.039081 kernel: nvme nvme0: using unchecked data buffer Sep 12 17:49:24.148621 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 12 17:49:24.175384 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 12 17:49:24.176247 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:49:24.194484 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 12 17:49:24.195187 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 12 17:49:24.206767 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 17:49:24.207553 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:49:24.208827 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:49:24.209992 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:49:24.211789 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:49:24.214576 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:49:24.239879 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:49:24.243835 disk-uuid[695]: Primary Header is updated. Sep 12 17:49:24.243835 disk-uuid[695]: Secondary Entries is updated. Sep 12 17:49:24.243835 disk-uuid[695]: Secondary Header is updated. Sep 12 17:49:24.247555 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:49:25.263116 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:49:25.263196 disk-uuid[703]: The operation has completed successfully. 
Sep 12 17:49:25.408366 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:49:25.408492 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:49:25.446771 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:49:25.461272 sh[963]: Success Sep 12 17:49:25.489348 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:49:25.489430 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:49:25.489453 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 17:49:25.502062 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 12 17:49:25.603595 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:49:25.608137 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:49:25.618472 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:49:25.640080 kernel: BTRFS: device fsid 74707491-1b86-4926-8bdb-c533ce2a0c32 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (986) Sep 12 17:49:25.644779 kernel: BTRFS info (device dm-0): first mount of filesystem 74707491-1b86-4926-8bdb-c533ce2a0c32 Sep 12 17:49:25.644858 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:49:25.676983 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:49:25.677063 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:49:25.677078 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 17:49:25.681916 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:49:25.682815 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Sep 12 17:49:25.683661 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:49:25.685220 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:49:25.688198 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:49:25.733073 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1019) Sep 12 17:49:25.737421 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:49:25.737489 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:49:25.749860 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:49:25.749939 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:49:25.759101 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:49:25.759892 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:49:25.762777 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:49:25.816008 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:49:25.821916 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:49:25.889799 systemd-networkd[1156]: lo: Link UP Sep 12 17:49:25.890697 systemd-networkd[1156]: lo: Gained carrier Sep 12 17:49:25.893955 systemd-networkd[1156]: Enumeration completed Sep 12 17:49:25.895278 systemd-networkd[1156]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:49:25.895285 systemd-networkd[1156]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 17:49:25.898547 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:49:25.903011 systemd[1]: Reached target network.target - Network. Sep 12 17:49:25.905288 systemd-networkd[1156]: eth0: Link UP Sep 12 17:49:25.905297 systemd-networkd[1156]: eth0: Gained carrier Sep 12 17:49:25.905318 systemd-networkd[1156]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:49:25.918376 systemd-networkd[1156]: eth0: DHCPv4 address 172.31.28.120/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 17:49:25.980214 ignition[1106]: Ignition 2.21.0 Sep 12 17:49:25.980231 ignition[1106]: Stage: fetch-offline Sep 12 17:49:25.980419 ignition[1106]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:49:25.980434 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:49:25.980728 ignition[1106]: Ignition finished successfully Sep 12 17:49:25.982252 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:49:25.984144 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 12 17:49:26.012229 ignition[1165]: Ignition 2.21.0 Sep 12 17:49:26.012243 ignition[1165]: Stage: fetch Sep 12 17:49:26.012627 ignition[1165]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:49:26.012639 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:49:26.012760 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:49:26.056659 ignition[1165]: PUT result: OK Sep 12 17:49:26.061927 ignition[1165]: parsed url from cmdline: "" Sep 12 17:49:26.061937 ignition[1165]: no config URL provided Sep 12 17:49:26.061946 ignition[1165]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:49:26.061958 ignition[1165]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:49:26.061986 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:49:26.064200 ignition[1165]: PUT result: OK Sep 12 17:49:26.064296 ignition[1165]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 12 17:49:26.065208 ignition[1165]: GET result: OK Sep 12 17:49:26.065300 ignition[1165]: parsing config with SHA512: 93507fbf77f209d37c2bf78abcb09151db2573ad80d31b97f3e2247baefe4432f5805699d93fd90c6ac66db87053e5405112dd504dc077d08ca959c944363751 Sep 12 17:49:26.070696 unknown[1165]: fetched base config from "system" Sep 12 17:49:26.070711 unknown[1165]: fetched base config from "system" Sep 12 17:49:26.071316 ignition[1165]: fetch: fetch complete Sep 12 17:49:26.070717 unknown[1165]: fetched user config from "aws" Sep 12 17:49:26.071323 ignition[1165]: fetch: fetch passed Sep 12 17:49:26.071389 ignition[1165]: Ignition finished successfully Sep 12 17:49:26.074032 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:49:26.076153 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 12 17:49:26.104112 ignition[1172]: Ignition 2.21.0 Sep 12 17:49:26.104128 ignition[1172]: Stage: kargs Sep 12 17:49:26.104512 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:49:26.104525 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:49:26.104639 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:49:26.106250 ignition[1172]: PUT result: OK Sep 12 17:49:26.110867 ignition[1172]: kargs: kargs passed Sep 12 17:49:26.111085 ignition[1172]: Ignition finished successfully Sep 12 17:49:26.112819 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:49:26.114722 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:49:26.145953 ignition[1179]: Ignition 2.21.0 Sep 12 17:49:26.145971 ignition[1179]: Stage: disks Sep 12 17:49:26.146413 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:49:26.146426 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:49:26.146548 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:49:26.150400 ignition[1179]: PUT result: OK Sep 12 17:49:26.155329 ignition[1179]: disks: disks passed Sep 12 17:49:26.155411 ignition[1179]: Ignition finished successfully Sep 12 17:49:26.156887 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:49:26.157905 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:49:26.158341 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:49:26.158990 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:49:26.159641 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:49:26.160542 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:49:26.162519 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 12 17:49:26.204702 systemd-fsck[1187]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 17:49:26.207736 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:49:26.210077 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:49:26.364075 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 26739aba-b0be-4ce3-bfbd-ca4dbcbe2426 r/w with ordered data mode. Quota mode: none. Sep 12 17:49:26.364901 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:49:26.365770 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:49:26.368482 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:49:26.371127 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:49:26.372301 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:49:26.372983 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:49:26.373011 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:49:26.378984 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:49:26.380669 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 17:49:26.400093 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1206) Sep 12 17:49:26.405591 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:49:26.405681 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:49:26.414243 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:49:26.414314 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:49:26.418293 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:49:26.479372 initrd-setup-root[1230]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:49:26.486217 initrd-setup-root[1237]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:49:26.491683 initrd-setup-root[1244]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:49:26.496229 initrd-setup-root[1251]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:49:26.620006 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:49:26.621875 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:49:26.625190 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:49:26.639702 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 12 17:49:26.641474 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:49:26.670131 ignition[1319]: INFO : Ignition 2.21.0 Sep 12 17:49:26.670131 ignition[1319]: INFO : Stage: mount Sep 12 17:49:26.671347 ignition[1319]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:49:26.671347 ignition[1319]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:49:26.671347 ignition[1319]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:49:26.674186 ignition[1319]: INFO : PUT result: OK Sep 12 17:49:26.675488 ignition[1319]: INFO : mount: mount passed Sep 12 17:49:26.676076 ignition[1319]: INFO : Ignition finished successfully Sep 12 17:49:26.675772 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:49:26.677622 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:49:26.679700 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:49:26.697896 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:49:26.732066 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1330) Sep 12 17:49:26.736204 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:49:26.736271 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:49:26.745053 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:49:26.745134 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:49:26.747471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:49:26.781306 ignition[1347]: INFO : Ignition 2.21.0
Sep 12 17:49:26.781306 ignition[1347]: INFO : Stage: files
Sep 12 17:49:26.782997 ignition[1347]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:49:26.782997 ignition[1347]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:49:26.782997 ignition[1347]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:49:26.784483 ignition[1347]: INFO : PUT result: OK
Sep 12 17:49:26.788063 ignition[1347]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:49:26.789453 ignition[1347]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:49:26.789453 ignition[1347]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:49:26.793475 ignition[1347]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:49:26.794395 ignition[1347]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:49:26.795743 unknown[1347]: wrote ssh authorized keys file for user: core
Sep 12 17:49:26.796319 ignition[1347]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:49:26.798833 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 12 17:49:26.799833 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 12 17:49:26.873784 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:49:27.250319 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 12 17:49:27.250319 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:49:27.250319 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 12 17:49:27.462616 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:49:27.585707 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:49:27.585707 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:49:27.587644 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:49:27.587644 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:49:27.587644 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:49:27.587644 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:49:27.587644 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:49:27.587644 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:49:27.587644 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:49:27.593128 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:49:27.593128 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:49:27.593128 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:49:27.595888 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:49:27.595888 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:49:27.595888 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 12 17:49:27.672208 systemd-networkd[1156]: eth0: Gained IPv6LL
Sep 12 17:49:27.923445 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 17:49:28.327208 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:49:28.327208 ignition[1347]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 17:49:28.329208 ignition[1347]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:49:28.334125 ignition[1347]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:49:28.334125 ignition[1347]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 17:49:28.336190 ignition[1347]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:49:28.336190 ignition[1347]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:49:28.336190 ignition[1347]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:49:28.336190 ignition[1347]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:49:28.336190 ignition[1347]: INFO : files: files passed
Sep 12 17:49:28.336190 ignition[1347]: INFO : Ignition finished successfully
Sep 12 17:49:28.336635 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:49:28.339168 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:49:28.342264 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:49:28.350515 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:49:28.350621 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:49:28.359645 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:49:28.359645 initrd-setup-root-after-ignition[1377]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:49:28.361406 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:49:28.363276 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:49:28.364722 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:49:28.366537 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:49:28.425705 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:49:28.425852 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:49:28.427239 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:49:28.428362 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:49:28.429218 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:49:28.430421 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:49:28.468935 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:49:28.471485 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:49:28.494840 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:49:28.495668 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:49:28.496637 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:49:28.497522 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:49:28.497755 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:49:28.498963 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:49:28.499890 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:49:28.500705 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:49:28.501498 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:49:28.502262 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:49:28.503106 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:49:28.503866 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:49:28.504677 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:49:28.505494 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:49:28.506581 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:49:28.507486 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:49:28.508212 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:49:28.508440 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:49:28.509461 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:49:28.510283 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:49:28.511073 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:49:28.511211 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:49:28.511824 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:49:28.512066 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:49:28.513404 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:49:28.513647 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:49:28.514371 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:49:28.514570 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:49:28.517164 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:49:28.520300 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:49:28.520939 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:49:28.521205 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:49:28.523333 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:49:28.524184 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:49:28.530910 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:49:28.531532 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:49:28.553292 ignition[1401]: INFO : Ignition 2.21.0
Sep 12 17:49:28.553292 ignition[1401]: INFO : Stage: umount
Sep 12 17:49:28.554743 ignition[1401]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:49:28.554743 ignition[1401]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:49:28.554743 ignition[1401]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:49:28.554743 ignition[1401]: INFO : PUT result: OK
Sep 12 17:49:28.560497 ignition[1401]: INFO : umount: umount passed
Sep 12 17:49:28.560497 ignition[1401]: INFO : Ignition finished successfully
Sep 12 17:49:28.563030 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:49:28.563987 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:49:28.564172 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:49:28.565438 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:49:28.565546 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:49:28.566004 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:49:28.566100 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:49:28.567512 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:49:28.567574 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:49:28.568133 systemd[1]: Stopped target network.target - Network.
Sep 12 17:49:28.568948 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:49:28.569014 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:49:28.571139 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:49:28.571532 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:49:28.576130 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:49:28.576631 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:49:28.577608 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:49:28.578278 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:49:28.578347 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:49:28.579099 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:49:28.579161 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:49:28.579668 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:49:28.579756 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:49:28.580315 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:49:28.580383 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:49:28.581144 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:49:28.581721 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:49:28.583999 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:49:28.584152 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:49:28.585214 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:49:28.585365 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:49:28.590431 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 17:49:28.590815 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:49:28.590969 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:49:28.593080 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 17:49:28.595132 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 17:49:28.595563 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:49:28.595619 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:49:28.596224 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:49:28.596300 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:49:28.598095 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:49:28.599120 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:49:28.599195 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:49:28.599771 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:49:28.599832 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:49:28.603209 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:49:28.603284 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:49:28.605137 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:49:28.605208 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:49:28.606124 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:49:28.610600 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 17:49:28.610706 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:49:28.625236 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:49:28.625467 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:49:28.627261 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:49:28.627410 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:49:28.628915 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:49:28.629002 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:49:28.629825 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:49:28.629871 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:49:28.630538 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:49:28.630604 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:49:28.631830 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:49:28.631889 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:49:28.632943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:49:28.633011 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:49:28.635190 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:49:28.636620 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 17:49:28.636688 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:49:28.637978 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:49:28.638099 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:49:28.638976 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 17:49:28.639032 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:49:28.639946 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:49:28.640000 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:49:28.641166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:49:28.641219 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:49:28.645836 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 17:49:28.645911 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 12 17:49:28.645959 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 17:49:28.646010 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:49:28.663159 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:49:28.663286 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:49:28.664489 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:49:28.665830 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:49:28.694541 systemd[1]: Switching root.
Sep 12 17:49:28.719782 systemd-journald[207]: Journal stopped
Sep 12 17:49:30.361150 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:49:30.361260 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:49:30.361285 kernel: SELinux: policy capability open_perms=1
Sep 12 17:49:30.361306 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:49:30.361327 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:49:30.361351 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:49:30.361377 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:49:30.361397 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:49:30.361415 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:49:30.361441 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 17:49:30.361460 kernel: audit: type=1403 audit(1757699369.071:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:49:30.361482 systemd[1]: Successfully loaded SELinux policy in 69.899ms.
Sep 12 17:49:30.361516 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.264ms.
Sep 12 17:49:30.361539 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:49:30.361565 systemd[1]: Detected virtualization amazon.
Sep 12 17:49:30.361589 systemd[1]: Detected architecture x86-64.
Sep 12 17:49:30.361610 systemd[1]: Detected first boot.
Sep 12 17:49:30.361631 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:49:30.361652 zram_generator::config[1446]: No configuration found.
Sep 12 17:49:30.361672 kernel: Guest personality initialized and is inactive
Sep 12 17:49:30.361692 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 17:49:30.361711 kernel: Initialized host personality
Sep 12 17:49:30.361733 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 17:49:30.361756 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:49:30.361778 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 17:49:30.361799 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:49:30.361820 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:49:30.361840 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:49:30.361861 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:49:30.361882 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:49:30.361902 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:49:30.361927 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:49:30.361950 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:49:30.361970 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:49:30.361991 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:49:30.362012 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:49:30.370084 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:49:30.370146 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:49:30.370169 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:49:30.370192 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:49:30.370222 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:49:30.370245 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:49:30.370268 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:49:30.370290 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:49:30.370310 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:49:30.370331 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:49:30.370351 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:49:30.370375 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:49:30.370395 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:49:30.370416 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:49:30.370436 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:49:30.370457 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:49:30.370486 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:49:30.370504 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:49:30.370529 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:49:30.370550 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 17:49:30.370575 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:49:30.370596 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:49:30.370616 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:49:30.370638 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:49:30.370658 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:49:30.370679 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:49:30.370702 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:49:30.370725 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:49:30.370745 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:49:30.370769 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:49:30.370788 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:49:30.370820 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:49:30.370842 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:49:30.370863 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:49:30.370883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:49:30.370906 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:49:30.370928 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:49:30.370950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:49:30.370976 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:49:30.370998 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:49:30.371020 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:49:30.381593 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:49:30.381637 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:49:30.381658 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:49:30.381678 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:49:30.381698 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:49:30.381726 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:49:30.381748 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:49:30.381768 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:49:30.381788 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:49:30.381809 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:49:30.381829 kernel: fuse: init (API version 7.41)
Sep 12 17:49:30.381849 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:49:30.381869 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 17:49:30.381891 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:49:30.381915 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:49:30.381939 systemd[1]: Stopped verity-setup.service.
Sep 12 17:49:30.381958 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:49:30.381978 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:49:30.381998 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:49:30.382018 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:49:30.386454 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:49:30.386499 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:49:30.386528 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:49:30.386549 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:49:30.386571 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:49:30.386592 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:49:30.386611 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:49:30.386634 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:49:30.386653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:49:30.386910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:49:30.386939 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:49:30.386961 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:49:30.386982 kernel: loop: module loaded
Sep 12 17:49:30.387010 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:49:30.387029 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:49:30.387141 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:49:30.387162 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:49:30.387181 kernel: ACPI: bus type drm_connector registered
Sep 12 17:49:30.387200 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:49:30.387219 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:49:30.387240 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:49:30.387266 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:49:30.387287 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 17:49:30.387309 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:49:30.387331 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:49:30.387354 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:49:30.387382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:49:30.387405 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:49:30.387427 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:49:30.387450 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:49:30.387470 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:49:30.387491 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:49:30.387510 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:49:30.387530 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:49:30.387553 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:49:30.387626 systemd-journald[1532]: Collecting audit messages is disabled.
Sep 12 17:49:30.387666 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:49:30.387687 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 17:49:30.387709 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:49:30.387729 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:49:30.387751 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:49:30.387782 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:49:30.387808 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:49:30.387829 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 17:49:30.387850 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:49:30.387872 systemd-journald[1532]: Journal started
Sep 12 17:49:30.387912 systemd-journald[1532]: Runtime Journal (/run/log/journal/ec2561e7fbdd10799587426ce9d4c138) is 4.8M, max 38.4M, 33.6M free.
Sep 12 17:49:30.395385 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:49:29.819413 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:49:29.832646 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 12 17:49:29.833178 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:49:30.401123 kernel: loop0: detected capacity change from 0 to 111000
Sep 12 17:49:30.447599 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:49:30.457116 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:49:30.468405 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 17:49:30.478106 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:49:30.482156 systemd-journald[1532]: Time spent on flushing to /var/log/journal/ec2561e7fbdd10799587426ce9d4c138 is 34.269ms for 1031 entries.
Sep 12 17:49:30.482156 systemd-journald[1532]: System Journal (/var/log/journal/ec2561e7fbdd10799587426ce9d4c138) is 8M, max 195.6M, 187.6M free.
Sep 12 17:49:30.539528 systemd-journald[1532]: Received client request to flush runtime journal.
Sep 12 17:49:30.540005 kernel: loop1: detected capacity change from 0 to 72360
Sep 12 17:49:30.493471 systemd-tmpfiles[1563]: ACLs are not supported, ignoring.
Sep 12 17:49:30.493493 systemd-tmpfiles[1563]: ACLs are not supported, ignoring.
Sep 12 17:49:30.499832 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:49:30.504307 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:49:30.541264 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:49:30.582072 kernel: loop2: detected capacity change from 0 to 229808
Sep 12 17:49:30.587426 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:49:30.593505 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:49:30.629833 systemd-tmpfiles[1602]: ACLs are not supported, ignoring.
Sep 12 17:49:30.630273 systemd-tmpfiles[1602]: ACLs are not supported, ignoring.
Sep 12 17:49:30.636296 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:49:30.652076 kernel: loop3: detected capacity change from 0 to 128016
Sep 12 17:49:30.727131 kernel: loop4: detected capacity change from 0 to 111000
Sep 12 17:49:30.760311 kernel: loop5: detected capacity change from 0 to 72360
Sep 12 17:49:30.809064 kernel: loop6: detected capacity change from 0 to 229808
Sep 12 17:49:30.838089 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:49:30.865099 kernel: loop7: detected capacity change from 0 to 128016
Sep 12 17:49:30.900077 (sd-merge)[1607]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 12 17:49:30.902464 (sd-merge)[1607]: Merged extensions into '/usr'.
Sep 12 17:49:30.909067 systemd[1]: Reload requested from client PID 1562 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:49:30.909235 systemd[1]: Reloading...
Sep 12 17:49:31.047109 zram_generator::config[1633]: No configuration found.
Sep 12 17:49:31.368067 ldconfig[1557]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:49:31.401152 systemd[1]: Reloading finished in 491 ms.
Sep 12 17:49:31.414648 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:49:31.417949 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:49:31.430236 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:49:31.434314 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:49:31.468311 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 17:49:31.468364 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 17:49:31.468747 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:49:31.470676 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:49:31.471196 systemd[1]: Reload requested from client PID 1685 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:49:31.471215 systemd[1]: Reloading...
Sep 12 17:49:31.476131 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:49:31.476587 systemd-tmpfiles[1686]: ACLs are not supported, ignoring.
Sep 12 17:49:31.476805 systemd-tmpfiles[1686]: ACLs are not supported, ignoring.
Sep 12 17:49:31.487577 systemd-tmpfiles[1686]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:49:31.487596 systemd-tmpfiles[1686]: Skipping /boot
Sep 12 17:49:31.509314 systemd-tmpfiles[1686]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:49:31.509333 systemd-tmpfiles[1686]: Skipping /boot
Sep 12 17:49:31.583064 zram_generator::config[1709]: No configuration found.
Sep 12 17:49:31.833172 systemd[1]: Reloading finished in 361 ms.
Sep 12 17:49:31.857406 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:49:31.874726 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:49:31.884857 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:49:31.889339 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:49:31.899375 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:49:31.903409 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:49:31.907257 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:49:31.912440 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:49:31.918689 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:49:31.918973 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:49:31.924196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:49:31.930837 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:49:31.943166 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:49:31.945113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:49:31.945331 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:49:31.945476 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:49:31.957757 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:49:31.961891 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:49:31.962393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:49:31.963891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:49:31.964067 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:49:31.964219 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:49:31.980743 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:49:31.983133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:49:31.990523 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:49:31.992335 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:49:31.992632 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:49:31.993083 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:49:31.993833 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:49:31.997331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:49:31.997604 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:49:32.000103 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:49:32.005497 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:49:32.011801 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:49:32.020217 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:49:32.021436 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:49:32.021674 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:49:32.026100 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:49:32.033461 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:49:32.039775 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:49:32.040658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:49:32.043282 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:49:32.051354 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:49:32.064805 systemd-udevd[1771]: Using default interface naming scheme 'v255'.
Sep 12 17:49:32.089394 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:49:32.099126 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:49:32.103209 augenrules[1808]: No rules
Sep 12 17:49:32.105262 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:49:32.105555 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:49:32.122393 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:49:32.124147 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:49:32.141211 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:49:32.148201 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:49:32.241827 systemd-resolved[1770]: Positive Trust Anchors:
Sep 12 17:49:32.241852 systemd-resolved[1770]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:49:32.241903 systemd-resolved[1770]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:49:32.255467 systemd-resolved[1770]: Defaulting to hostname 'linux'.
Sep 12 17:49:32.261315 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:49:32.262538 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:49:32.263718 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:49:32.265318 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:49:32.267180 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:49:32.267819 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 12 17:49:32.269243 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:49:32.269928 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:49:32.271526 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:49:32.272084 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:49:32.272133 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:49:32.273118 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:49:32.276249 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:49:32.279833 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:49:32.290129 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 17:49:32.291334 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 17:49:32.293189 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 17:49:32.306066 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:49:32.307190 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 17:49:32.314472 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:49:32.324319 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:49:32.326161 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:49:32.326859 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:49:32.326905 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:49:32.331359 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:49:32.336108 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:49:32.339598 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:49:32.345251 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:49:32.347357 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:49:32.349125 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:49:32.352355 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 12 17:49:32.355923 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:49:32.362333 systemd[1]: Started ntpd.service - Network Time Service.
Sep 12 17:49:32.367858 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:49:32.378354 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 12 17:49:32.382469 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:49:32.387523 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:49:32.393065 jq[1850]: false
Sep 12 17:49:32.418298 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:49:32.422315 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:49:32.425376 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:49:32.428297 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:49:32.449163 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:49:32.453403 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:49:32.454643 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:49:32.454930 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:49:32.490605 google_oslogin_nss_cache[1852]: oslogin_cache_refresh[1852]: Refreshing passwd entry cache
Sep 12 17:49:32.488879 systemd-networkd[1824]: lo: Link UP
Sep 12 17:49:32.487918 oslogin_cache_refresh[1852]: Refreshing passwd entry cache
Sep 12 17:49:32.488886 systemd-networkd[1824]: lo: Gained carrier
Sep 12 17:49:32.489721 systemd-networkd[1824]: Enumeration completed
Sep 12 17:49:32.489854 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:49:32.492272 systemd[1]: Reached target network.target - Network.
Sep 12 17:49:32.499263 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:49:32.502960 google_oslogin_nss_cache[1852]: oslogin_cache_refresh[1852]: Failure getting users, quitting
Sep 12 17:49:32.503952 google_oslogin_nss_cache[1852]: oslogin_cache_refresh[1852]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 17:49:32.503952 google_oslogin_nss_cache[1852]: oslogin_cache_refresh[1852]: Refreshing group entry cache
Sep 12 17:49:32.503137 oslogin_cache_refresh[1852]: Failure getting users, quitting
Sep 12 17:49:32.503166 oslogin_cache_refresh[1852]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 17:49:32.503218 oslogin_cache_refresh[1852]: Refreshing group entry cache
Sep 12 17:49:32.508332 google_oslogin_nss_cache[1852]: oslogin_cache_refresh[1852]: Failure getting groups, quitting
Sep 12 17:49:32.506957 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 17:49:32.510351 oslogin_cache_refresh[1852]: Failure getting groups, quitting
Sep 12 17:49:32.510546 google_oslogin_nss_cache[1852]: oslogin_cache_refresh[1852]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 17:49:32.510384 oslogin_cache_refresh[1852]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 17:49:32.512173 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:49:32.523654 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 12 17:49:32.525414 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 12 17:49:32.540178 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:49:32.540458 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:49:32.545595 extend-filesystems[1851]: Found /dev/nvme0n1p6
Sep 12 17:49:32.555222 jq[1862]: true
Sep 12 17:49:32.572613 update_engine[1860]: I20250912 17:49:32.564877 1860 main.cc:92] Flatcar Update Engine starting
Sep 12 17:49:32.580781 extend-filesystems[1851]: Found /dev/nvme0n1p9
Sep 12 17:49:32.603751 dbus-daemon[1848]: [system] SELinux support is enabled
Sep 12 17:49:32.603953 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:49:32.611864 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:49:32.611897 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:49:32.613417 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:49:32.613445 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:49:32.620502 extend-filesystems[1851]: Checking size of /dev/nvme0n1p9
Sep 12 17:49:32.662220 jq[1884]: true
Sep 12 17:49:32.676488 update_engine[1860]: I20250912 17:49:32.676391 1860 update_check_scheduler.cc:74] Next update check in 4m24s
Sep 12 17:49:32.680066 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:49:32.680928 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:49:32.683011 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 17:49:32.694536 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:49:32.704128 tar[1873]: linux-amd64/LICENSE
Sep 12 17:49:32.704128 tar[1873]: linux-amd64/helm
Sep 12 17:49:32.712196 extend-filesystems[1851]: Resized partition /dev/nvme0n1p9
Sep 12 17:49:32.735503 (ntainerd)[1903]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:49:32.752846 ntpd[1854]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 14:59:08 UTC 2025 (1): Starting
Sep 12 17:49:32.753854 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 14:59:08 UTC 2025 (1): Starting
Sep 12 17:49:32.753854 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 12 17:49:32.753854 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: ----------------------------------------------------
Sep 12 17:49:32.753854 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: ntp-4 is maintained by Network Time Foundation,
Sep 12 17:49:32.753854 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 17:49:32.753854 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: corporation. Support and training for ntp-4 are
Sep 12 17:49:32.753854 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: available at https://www.nwtime.org/support
Sep 12 17:49:32.753854 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: ----------------------------------------------------
Sep 12 17:49:32.752882 ntpd[1854]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 12 17:49:32.752893 ntpd[1854]: ----------------------------------------------------
Sep 12 17:49:32.752904 ntpd[1854]: ntp-4 is maintained by Network Time Foundation,
Sep 12 17:49:32.752915 ntpd[1854]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 17:49:32.752926 ntpd[1854]: corporation. Support and training for ntp-4 are
Sep 12 17:49:32.752937 ntpd[1854]: available at https://www.nwtime.org/support
Sep 12 17:49:32.752950 ntpd[1854]: ----------------------------------------------------
Sep 12 17:49:32.765143 ntpd[1854]: proto: precision = 0.091 usec (-23)
Sep 12 17:49:32.773712 extend-filesystems[1912]: resize2fs 1.47.2 (1-Jan-2025)
Sep 12 17:49:32.776320 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: proto: precision = 0.091 usec (-23)
Sep 12 17:49:32.776320 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: basedate set to 2025-08-31
Sep 12 17:49:32.776320 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: gps base set to 2025-08-31 (week 2382)
Sep 12 17:49:32.774767 ntpd[1854]: basedate set to 2025-08-31
Sep 12 17:49:32.774792 ntpd[1854]: gps base set to 2025-08-31 (week 2382)
Sep 12 17:49:32.781808 (udev-worker)[1845]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:49:32.786621 ntpd[1854]: Listen and drop on 0 v6wildcard [::]:123
Sep 12 17:49:32.799071 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 12 17:49:32.797091 systemd-networkd[1824]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:49:32.799224 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: Listen and drop on 0 v6wildcard [::]:123
Sep 12 17:49:32.799224 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 12 17:49:32.794863 ntpd[1854]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 12 17:49:32.797098 systemd-networkd[1824]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:49:32.809279 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:49:32.816645 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: Listen normally on 2 lo 127.0.0.1:123
Sep 12 17:49:32.816645 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: Listen normally on 3 lo [::1]:123
Sep 12 17:49:32.816645 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: bind(20) AF_INET6 fe80::4b3:77ff:fe1b:172b%2#123 flags 0x11 failed: Cannot assign requested address
Sep 12 17:49:32.816645 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: unable to create socket on eth0 (4) for fe80::4b3:77ff:fe1b:172b%2#123
Sep 12 17:49:32.816645 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: failed to init interface for address fe80::4b3:77ff:fe1b:172b%2
Sep 12 17:49:32.816645 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: Listening on routing socket on fd #20 for interface updates
Sep 12 17:49:32.810845 ntpd[1854]: Listen normally on 2 lo 127.0.0.1:123
Sep 12 17:49:32.809657 systemd-networkd[1824]: eth0: Link UP
Sep 12 17:49:32.810899 ntpd[1854]: Listen normally on 3 lo [::1]:123
Sep 12 17:49:32.809990 systemd-networkd[1824]: eth0: Gained carrier
Sep 12 17:49:32.810953 ntpd[1854]: bind(20) AF_INET6 fe80::4b3:77ff:fe1b:172b%2#123 flags 0x11 failed: Cannot assign requested address
Sep 12 17:49:32.810022 systemd-networkd[1824]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:49:32.810975 ntpd[1854]: unable to create socket on eth0 (4) for fe80::4b3:77ff:fe1b:172b%2#123
Sep 12 17:49:32.810674 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 12 17:49:32.810990 ntpd[1854]: failed to init interface for address fe80::4b3:77ff:fe1b:172b%2
Sep 12 17:49:32.811024 ntpd[1854]: Listening on routing socket on fd #20 for interface updates
Sep 12 17:49:32.830147 dbus-daemon[1848]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1824 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 12 17:49:32.829894 systemd-networkd[1824]: eth0: DHCPv4 address 172.31.28.120/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:49:32.841396 ntpd[1854]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:49:32.843635 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 12 17:49:32.849797 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:49:32.849797 ntpd[1854]: 12 Sep 17:49:32 ntpd[1854]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:49:32.841441 ntpd[1854]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:49:32.845282 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:49:32.859672 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:49:33.006884 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 12 17:49:33.006964 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:49:33.006988 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Sep 12 17:49:33.007008 kernel: ACPI: button: Sleep Button [SLPF]
Sep 12 17:49:33.007027 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 12 17:49:33.007400 coreos-metadata[1846]: Sep 12 17:49:32.994 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 12 17:49:33.007400 coreos-metadata[1846]: Sep 12 17:49:32.996 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 12 17:49:33.007400 coreos-metadata[1846]: Sep 12 17:49:33.001 INFO Fetch successful
Sep 12 17:49:33.007400 coreos-metadata[1846]: Sep 12 17:49:33.001 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 12 17:49:33.007400 coreos-metadata[1846]: Sep 12 17:49:33.005 INFO Fetch successful
Sep 12 17:49:33.007400 coreos-metadata[1846]: Sep 12 17:49:33.005 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 12 17:49:33.007400 coreos-metadata[1846]: Sep 12 17:49:33.007 INFO Fetch successful
Sep 12 17:49:33.008068 coreos-metadata[1846]: Sep 12 17:49:33.007 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 12 17:49:33.010980 systemd-logind[1859]: New seat seat0.
Sep 12 17:49:33.011821 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.012 INFO Fetch successful
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.012 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.013 INFO Fetch failed with 404: resource not found
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.013 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.015 INFO Fetch successful
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.015 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.020 INFO Fetch successful
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.042 INFO Fetch successful
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.045 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.046 INFO Fetch successful
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.047 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 12 17:49:33.055888 coreos-metadata[1846]: Sep 12 17:49:33.048 INFO Fetch successful
Sep 12 17:49:33.058775 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 12 17:49:33.075944 extend-filesystems[1912]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 12 17:49:33.075944 extend-filesystems[1912]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 17:49:33.075944 extend-filesystems[1912]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 12 17:49:33.116161 extend-filesystems[1851]: Resized filesystem in /dev/nvme0n1p9
Sep 12 17:49:33.077898 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:49:33.122013 bash[1944]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:49:33.099018 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:49:33.106086 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:49:33.123464 systemd[1]: Starting sshkeys.service...
Sep 12 17:49:33.223773 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 12 17:49:33.232269 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 12 17:49:33.247607 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 17:49:33.249008 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:49:33.479195 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 12 17:49:33.483312 dbus-daemon[1848]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 12 17:49:33.483950 dbus-daemon[1848]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1932 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 12 17:49:33.491514 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 12 17:49:33.504222 locksmithd[1909]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:49:33.557901 coreos-metadata[1978]: Sep 12 17:49:33.557 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:49:33.558800 coreos-metadata[1978]: Sep 12 17:49:33.558 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 17:49:33.559548 coreos-metadata[1978]: Sep 12 17:49:33.559 INFO Fetch successful Sep 12 17:49:33.559630 coreos-metadata[1978]: Sep 12 17:49:33.559 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 17:49:33.561072 coreos-metadata[1978]: Sep 12 17:49:33.561 INFO Fetch successful Sep 12 17:49:33.567696 unknown[1978]: wrote ssh authorized keys file for user: core Sep 12 17:49:33.629597 systemd-logind[1859]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:49:33.635985 update-ssh-keys[2035]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:49:33.638908 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:49:33.649338 systemd[1]: Finished sshkeys.service. Sep 12 17:49:33.665657 systemd-logind[1859]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 17:49:33.683267 systemd-logind[1859]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 12 17:49:33.692958 containerd[1903]: time="2025-09-12T17:49:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:49:33.736588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 17:49:33.762707 containerd[1903]: time="2025-09-12T17:49:33.762661886Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 17:49:33.837071 containerd[1903]: time="2025-09-12T17:49:33.835910026Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.53µs"
Sep 12 17:49:33.837071 containerd[1903]: time="2025-09-12T17:49:33.835949344Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 17:49:33.837071 containerd[1903]: time="2025-09-12T17:49:33.835972117Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 17:49:33.837071 containerd[1903]: time="2025-09-12T17:49:33.836802893Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 17:49:33.837071 containerd[1903]: time="2025-09-12T17:49:33.836829959Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 17:49:33.837071 containerd[1903]: time="2025-09-12T17:49:33.836860495Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 17:49:33.837071 containerd[1903]: time="2025-09-12T17:49:33.836931677Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 17:49:33.837071 containerd[1903]: time="2025-09-12T17:49:33.836946955Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 17:49:33.847370 containerd[1903]: time="2025-09-12T17:49:33.846016448Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 17:49:33.847370 containerd[1903]: time="2025-09-12T17:49:33.846078083Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 17:49:33.847370 containerd[1903]: time="2025-09-12T17:49:33.846119300Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 17:49:33.847370 containerd[1903]: time="2025-09-12T17:49:33.846133103Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 17:49:33.847370 containerd[1903]: time="2025-09-12T17:49:33.846287537Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 17:49:33.847370 containerd[1903]: time="2025-09-12T17:49:33.846557936Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 17:49:33.847370 containerd[1903]: time="2025-09-12T17:49:33.846600165Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 17:49:33.847370 containerd[1903]: time="2025-09-12T17:49:33.846616568Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 17:49:33.847370 containerd[1903]: time="2025-09-12T17:49:33.847294340Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 17:49:33.848181 containerd[1903]: time="2025-09-12T17:49:33.848161070Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 17:49:33.848342 containerd[1903]: time="2025-09-12T17:49:33.848327652Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:49:33.856249 containerd[1903]: time="2025-09-12T17:49:33.856202237Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856420648Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856522103Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856550076Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856568863Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856583737Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856599663Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856616656Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856633027Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856647427Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856665349Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856683103Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856832183Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 17:49:33.857290 containerd[1903]: time="2025-09-12T17:49:33.856856630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 17:49:33.861513 containerd[1903]: time="2025-09-12T17:49:33.858429311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 17:49:33.861513 containerd[1903]: time="2025-09-12T17:49:33.859848912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 17:49:33.861513 containerd[1903]: time="2025-09-12T17:49:33.859888211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 17:49:33.861513 containerd[1903]: time="2025-09-12T17:49:33.859925219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 17:49:33.861513 containerd[1903]: time="2025-09-12T17:49:33.859945539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 17:49:33.861513 containerd[1903]: time="2025-09-12T17:49:33.859960941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 17:49:33.861513 containerd[1903]: time="2025-09-12T17:49:33.859995338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 17:49:33.861513 containerd[1903]: time="2025-09-12T17:49:33.860012381Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 17:49:33.861513 containerd[1903]: time="2025-09-12T17:49:33.860028269Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 17:49:33.861513 containerd[1903]: time="2025-09-12T17:49:33.860154910Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 17:49:33.861513 containerd[1903]: time="2025-09-12T17:49:33.860173461Z" level=info msg="Start snapshots syncer"
Sep 12 17:49:33.861513 containerd[1903]: time="2025-09-12T17:49:33.860227016Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 17:49:33.862660 containerd[1903]: time="2025-09-12T17:49:33.862545260Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 12 17:49:33.863019 containerd[1903]: time="2025-09-12T17:49:33.862878795Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 12 17:49:33.863149 containerd[1903]: time="2025-09-12T17:49:33.863130906Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 12 17:49:33.864211 containerd[1903]: time="2025-09-12T17:49:33.864057995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 12 17:49:33.864211 containerd[1903]: time="2025-09-12T17:49:33.864105068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 12 17:49:33.864211 containerd[1903]: time="2025-09-12T17:49:33.864145832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 12 17:49:33.864211 containerd[1903]: time="2025-09-12T17:49:33.864162076Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 12 17:49:33.864211 containerd[1903]: time="2025-09-12T17:49:33.864178630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 12 17:49:33.864540 containerd[1903]: time="2025-09-12T17:49:33.864192992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 12 17:49:33.864540 containerd[1903]: time="2025-09-12T17:49:33.864436056Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 12 17:49:33.864540 containerd[1903]: time="2025-09-12T17:49:33.864473518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 12 17:49:33.864853 containerd[1903]: time="2025-09-12T17:49:33.864772933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 12 17:49:33.864853 containerd[1903]: time="2025-09-12T17:49:33.864798700Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 12 17:49:33.866556 containerd[1903]: time="2025-09-12T17:49:33.865856206Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 17:49:33.866556 containerd[1903]: time="2025-09-12T17:49:33.865895683Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 17:49:33.866556 containerd[1903]: time="2025-09-12T17:49:33.865922951Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 17:49:33.866556 containerd[1903]: time="2025-09-12T17:49:33.865938650Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 17:49:33.866556 containerd[1903]: time="2025-09-12T17:49:33.865951506Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 12 17:49:33.866556 containerd[1903]: time="2025-09-12T17:49:33.865966483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 12 17:49:33.866556 containerd[1903]: time="2025-09-12T17:49:33.865982154Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 12 17:49:33.866556 containerd[1903]: time="2025-09-12T17:49:33.866005962Z" level=info msg="runtime interface created"
Sep 12 17:49:33.866556 containerd[1903]: time="2025-09-12T17:49:33.866013217Z" level=info msg="created NRI interface"
Sep 12 17:49:33.866556 containerd[1903]: time="2025-09-12T17:49:33.866025711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 12 17:49:33.866556 containerd[1903]: time="2025-09-12T17:49:33.866072270Z" level=info msg="Connect containerd service"
Sep 12 17:49:33.866556 containerd[1903]: time="2025-09-12T17:49:33.866115656Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 17:49:33.871446 containerd[1903]: time="2025-09-12T17:49:33.870837179Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:49:33.961345 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:49:33.967453 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:49:34.072559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:49:34.073567 polkitd[2011]: Started polkitd version 126
Sep 12 17:49:34.085975 sshd_keygen[1887]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:49:34.107131 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:49:34.116592 polkitd[2011]: Loading rules from directory /etc/polkit-1/rules.d
Sep 12 17:49:34.117163 polkitd[2011]: Loading rules from directory /run/polkit-1/rules.d
Sep 12 17:49:34.117222 polkitd[2011]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 12 17:49:34.117654 polkitd[2011]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 12 17:49:34.117682 polkitd[2011]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 12 17:49:34.117733 polkitd[2011]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 12 17:49:34.119666 polkitd[2011]: Finished loading, compiling and executing 2 rules
Sep 12 17:49:34.119998 systemd[1]: Started polkit.service - Authorization Manager.
Sep 12 17:49:34.121962 dbus-daemon[1848]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 12 17:49:34.127461 polkitd[2011]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 12 17:49:34.151394 systemd-hostnamed[1932]: Hostname set to (transient)
Sep 12 17:49:34.153346 systemd-resolved[1770]: System hostname changed to 'ip-172-31-28-120'.
Sep 12 17:49:34.162533 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:49:34.169378 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:49:34.190300 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:49:34.190582 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:49:34.197748 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:49:34.224020 containerd[1903]: time="2025-09-12T17:49:34.223965248Z" level=info msg="Start subscribing containerd event"
Sep 12 17:49:34.224243 containerd[1903]: time="2025-09-12T17:49:34.224203767Z" level=info msg="Start recovering state"
Sep 12 17:49:34.224528 containerd[1903]: time="2025-09-12T17:49:34.224241967Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 17:49:34.224693 containerd[1903]: time="2025-09-12T17:49:34.224643638Z" level=info msg="Start event monitor"
Sep 12 17:49:34.224693 containerd[1903]: time="2025-09-12T17:49:34.224668061Z" level=info msg="Start cni network conf syncer for default"
Sep 12 17:49:34.226063 containerd[1903]: time="2025-09-12T17:49:34.224838410Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 17:49:34.226176 containerd[1903]: time="2025-09-12T17:49:34.226156398Z" level=info msg="Start streaming server"
Sep 12 17:49:34.226257 containerd[1903]: time="2025-09-12T17:49:34.226245170Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 12 17:49:34.226383 containerd[1903]: time="2025-09-12T17:49:34.226368418Z" level=info msg="runtime interface starting up..."
Sep 12 17:49:34.226443 containerd[1903]: time="2025-09-12T17:49:34.226432190Z" level=info msg="starting plugins..."
Sep 12 17:49:34.226514 containerd[1903]: time="2025-09-12T17:49:34.226501889Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 12 17:49:34.226943 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 17:49:34.227792 containerd[1903]: time="2025-09-12T17:49:34.227767631Z" level=info msg="containerd successfully booted in 0.538478s"
Sep 12 17:49:34.237624 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:49:34.243357 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:49:34.247410 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 17:49:34.248808 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 17:49:34.350100 tar[1873]: linux-amd64/README.md
Sep 12 17:49:34.371297 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 17:49:34.648249 systemd-networkd[1824]: eth0: Gained IPv6LL
Sep 12 17:49:34.651681 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:49:34.652992 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:49:34.655515 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 12 17:49:34.661614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:49:34.670370 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:49:34.713523 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 17:49:34.760532 amazon-ssm-agent[2124]: Initializing new seelog logger
Sep 12 17:49:34.761689 amazon-ssm-agent[2124]: New Seelog Logger Creation Complete
Sep 12 17:49:34.761689 amazon-ssm-agent[2124]: 2025/09/12 17:49:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:49:34.761689 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:49:34.761689 amazon-ssm-agent[2124]: 2025/09/12 17:49:34 processing appconfig overrides
Sep 12 17:49:34.761689 amazon-ssm-agent[2124]: 2025/09/12 17:49:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:49:34.761689 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:49:34.761836 amazon-ssm-agent[2124]: 2025/09/12 17:49:34 processing appconfig overrides
Sep 12 17:49:34.762129 amazon-ssm-agent[2124]: 2025/09/12 17:49:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:49:34.762129 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:49:34.762129 amazon-ssm-agent[2124]: 2025/09/12 17:49:34 processing appconfig overrides
Sep 12 17:49:34.762129 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.7616 INFO Proxy environment variables:
Sep 12 17:49:34.764677 amazon-ssm-agent[2124]: 2025/09/12 17:49:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:49:34.764677 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:49:34.764798 amazon-ssm-agent[2124]: 2025/09/12 17:49:34 processing appconfig overrides
Sep 12 17:49:34.861579 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.7616 INFO https_proxy:
Sep 12 17:49:34.960295 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.7616 INFO http_proxy:
Sep 12 17:49:35.037470 amazon-ssm-agent[2124]: 2025/09/12 17:49:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:49:35.037470 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:49:35.037620 amazon-ssm-agent[2124]: 2025/09/12 17:49:35 processing appconfig overrides
Sep 12 17:49:35.058904 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.7616 INFO no_proxy:
Sep 12 17:49:35.070061 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.7617 INFO Checking if agent identity type OnPrem can be assumed
Sep 12 17:49:35.070061 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.7618 INFO Checking if agent identity type EC2 can be assumed
Sep 12 17:49:35.070061 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.7983 INFO Agent will take identity from EC2
Sep 12 17:49:35.070061 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.7999 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Sep 12 17:49:35.070061 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.7999 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Sep 12 17:49:35.070061 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.7999 INFO [amazon-ssm-agent] Starting Core Agent
Sep 12 17:49:35.070061 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.7999 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Sep 12 17:49:35.070061 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.7999 INFO [Registrar] Starting registrar module
Sep 12 17:49:35.070061 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.8010 INFO [EC2Identity] Checking disk for registration info
Sep 12 17:49:35.070061 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.8010 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Sep 12 17:49:35.070430 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.8011 INFO [EC2Identity] Generating registration keypair
Sep 12 17:49:35.070430 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.9952 INFO [EC2Identity] Checking write access before registering
Sep 12 17:49:35.070430 amazon-ssm-agent[2124]: 2025-09-12 17:49:34.9957 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Sep 12 17:49:35.070430 amazon-ssm-agent[2124]: 2025-09-12 17:49:35.0372 INFO [EC2Identity] EC2 registration was successful.
Sep 12 17:49:35.070430 amazon-ssm-agent[2124]: 2025-09-12 17:49:35.0372 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Sep 12 17:49:35.070430 amazon-ssm-agent[2124]: 2025-09-12 17:49:35.0373 INFO [CredentialRefresher] credentialRefresher has started
Sep 12 17:49:35.070430 amazon-ssm-agent[2124]: 2025-09-12 17:49:35.0373 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 12 17:49:35.070430 amazon-ssm-agent[2124]: 2025-09-12 17:49:35.0697 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 12 17:49:35.070430 amazon-ssm-agent[2124]: 2025-09-12 17:49:35.0699 INFO [CredentialRefresher] Credentials ready
Sep 12 17:49:35.156715 amazon-ssm-agent[2124]: 2025-09-12 17:49:35.0702 INFO [CredentialRefresher] Next credential rotation will be in 29.999991344566666 minutes
Sep 12 17:49:36.083215 amazon-ssm-agent[2124]: 2025-09-12 17:49:36.0826 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 12 17:49:36.184837 amazon-ssm-agent[2124]: 2025-09-12 17:49:36.0848 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2146) started
Sep 12 17:49:36.257120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:49:36.258583 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 17:49:36.260150 systemd[1]: Startup finished in 2.821s (kernel) + 6.379s (initrd) + 7.256s (userspace) = 16.458s.
Sep 12 17:49:36.269797 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:49:36.285511 amazon-ssm-agent[2124]: 2025-09-12 17:49:36.0848 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 12 17:49:36.422006 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 17:49:36.423518 systemd[1]: Started sshd@0-172.31.28.120:22-139.178.68.195:42016.service - OpenSSH per-connection server daemon (139.178.68.195:42016).
Sep 12 17:49:36.630139 sshd[2168]: Accepted publickey for core from 139.178.68.195 port 42016 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:36.631801 sshd-session[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:36.640459 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:49:36.642476 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:49:36.661831 systemd-logind[1859]: New session 1 of user core. Sep 12 17:49:36.680342 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:49:36.685202 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:49:36.701102 (systemd)[2177]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:49:36.705355 systemd-logind[1859]: New session c1 of user core. Sep 12 17:49:36.753398 ntpd[1854]: Listen normally on 5 eth0 172.31.28.120:123 Sep 12 17:49:36.753890 ntpd[1854]: 12 Sep 17:49:36 ntpd[1854]: Listen normally on 5 eth0 172.31.28.120:123 Sep 12 17:49:36.753890 ntpd[1854]: 12 Sep 17:49:36 ntpd[1854]: Listen normally on 6 eth0 [fe80::4b3:77ff:fe1b:172b%2]:123 Sep 12 17:49:36.753465 ntpd[1854]: Listen normally on 6 eth0 [fe80::4b3:77ff:fe1b:172b%2]:123 Sep 12 17:49:36.909611 systemd[2177]: Queued start job for default target default.target. Sep 12 17:49:36.918214 systemd[2177]: Created slice app.slice - User Application Slice. Sep 12 17:49:36.918246 systemd[2177]: Reached target paths.target - Paths. Sep 12 17:49:36.918522 systemd[2177]: Reached target timers.target - Timers. Sep 12 17:49:36.920144 systemd[2177]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:49:36.934806 systemd[2177]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:49:36.934957 systemd[2177]: Reached target sockets.target - Sockets. 
Sep 12 17:49:36.935018 systemd[2177]: Reached target basic.target - Basic System. Sep 12 17:49:36.935085 systemd[2177]: Reached target default.target - Main User Target. Sep 12 17:49:36.935125 systemd[2177]: Startup finished in 218ms. Sep 12 17:49:36.935894 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:49:36.942282 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:49:37.090923 systemd[1]: Started sshd@1-172.31.28.120:22-139.178.68.195:42020.service - OpenSSH per-connection server daemon (139.178.68.195:42020). Sep 12 17:49:37.270470 sshd[2188]: Accepted publickey for core from 139.178.68.195 port 42020 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:37.272646 sshd-session[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:37.277575 kubelet[2162]: E0912 17:49:37.277539 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:49:37.280773 systemd-logind[1859]: New session 2 of user core. Sep 12 17:49:37.284229 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:49:37.284619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:49:37.284815 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:49:37.285680 systemd[1]: kubelet.service: Consumed 1.081s CPU time, 268.8M memory peak. Sep 12 17:49:37.408699 sshd[2193]: Connection closed by 139.178.68.195 port 42020 Sep 12 17:49:37.409264 sshd-session[2188]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:37.414280 systemd[1]: sshd@1-172.31.28.120:22-139.178.68.195:42020.service: Deactivated successfully. 
Sep 12 17:49:37.416556 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 17:49:37.417593 systemd-logind[1859]: Session 2 logged out. Waiting for processes to exit.
Sep 12 17:49:37.419244 systemd-logind[1859]: Removed session 2.
Sep 12 17:49:37.440524 systemd[1]: Started sshd@2-172.31.28.120:22-139.178.68.195:42036.service - OpenSSH per-connection server daemon (139.178.68.195:42036).
Sep 12 17:49:37.611934 sshd[2199]: Accepted publickey for core from 139.178.68.195 port 42036 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:49:37.613533 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:49:37.619101 systemd-logind[1859]: New session 3 of user core.
Sep 12 17:49:37.633360 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 17:49:37.745713 sshd[2202]: Connection closed by 139.178.68.195 port 42036
Sep 12 17:49:37.746248 sshd-session[2199]: pam_unix(sshd:session): session closed for user core
Sep 12 17:49:37.752229 systemd-logind[1859]: Session 3 logged out. Waiting for processes to exit.
Sep 12 17:49:37.752451 systemd[1]: sshd@2-172.31.28.120:22-139.178.68.195:42036.service: Deactivated successfully.
Sep 12 17:49:37.754729 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 17:49:37.756262 systemd-logind[1859]: Removed session 3.
Sep 12 17:49:37.779121 systemd[1]: Started sshd@3-172.31.28.120:22-139.178.68.195:42042.service - OpenSSH per-connection server daemon (139.178.68.195:42042).
Sep 12 17:49:37.953000 sshd[2208]: Accepted publickey for core from 139.178.68.195 port 42042 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:49:37.954059 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:49:37.959997 systemd-logind[1859]: New session 4 of user core.
Sep 12 17:49:37.966266 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 17:49:38.088739 sshd[2211]: Connection closed by 139.178.68.195 port 42042
Sep 12 17:49:38.089267 sshd-session[2208]: pam_unix(sshd:session): session closed for user core
Sep 12 17:49:38.093229 systemd[1]: sshd@3-172.31.28.120:22-139.178.68.195:42042.service: Deactivated successfully.
Sep 12 17:49:38.096523 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 17:49:38.097881 systemd-logind[1859]: Session 4 logged out. Waiting for processes to exit.
Sep 12 17:49:38.099637 systemd-logind[1859]: Removed session 4.
Sep 12 17:49:38.122288 systemd[1]: Started sshd@4-172.31.28.120:22-139.178.68.195:42048.service - OpenSSH per-connection server daemon (139.178.68.195:42048).
Sep 12 17:49:38.293962 sshd[2217]: Accepted publickey for core from 139.178.68.195 port 42048 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:49:38.295622 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:49:38.301407 systemd-logind[1859]: New session 5 of user core.
Sep 12 17:49:38.310289 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 17:49:38.424481 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 17:49:38.424864 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:49:38.442293 sudo[2221]: pam_unix(sudo:session): session closed for user root
Sep 12 17:49:38.464517 sshd[2220]: Connection closed by 139.178.68.195 port 42048
Sep 12 17:49:38.465233 sshd-session[2217]: pam_unix(sshd:session): session closed for user core
Sep 12 17:49:38.469606 systemd[1]: sshd@4-172.31.28.120:22-139.178.68.195:42048.service: Deactivated successfully.
Sep 12 17:49:38.471512 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 17:49:38.472466 systemd-logind[1859]: Session 5 logged out. Waiting for processes to exit.
Sep 12 17:49:38.473790 systemd-logind[1859]: Removed session 5.
Sep 12 17:49:38.501332 systemd[1]: Started sshd@5-172.31.28.120:22-139.178.68.195:48716.service - OpenSSH per-connection server daemon (139.178.68.195:48716).
Sep 12 17:49:38.679123 sshd[2227]: Accepted publickey for core from 139.178.68.195 port 48716 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:49:38.680644 sshd-session[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:49:38.685093 systemd-logind[1859]: New session 6 of user core.
Sep 12 17:49:38.692259 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 17:49:38.791887 sudo[2232]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 17:49:38.792179 sudo[2232]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:49:38.799987 sudo[2232]: pam_unix(sudo:session): session closed for user root
Sep 12 17:49:38.805690 sudo[2231]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 17:49:38.806105 sudo[2231]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:49:38.816698 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:49:38.859571 augenrules[2254]: No rules
Sep 12 17:49:38.861051 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:49:38.861423 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:49:38.863973 sudo[2231]: pam_unix(sudo:session): session closed for user root
Sep 12 17:49:38.886308 sshd[2230]: Connection closed by 139.178.68.195 port 48716
Sep 12 17:49:38.887130 sshd-session[2227]: pam_unix(sshd:session): session closed for user core
Sep 12 17:49:38.892134 systemd[1]: sshd@5-172.31.28.120:22-139.178.68.195:48716.service: Deactivated successfully.
Sep 12 17:49:38.893994 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 17:49:38.895313 systemd-logind[1859]: Session 6 logged out. Waiting for processes to exit.
Sep 12 17:49:38.896889 systemd-logind[1859]: Removed session 6.
Sep 12 17:49:38.923589 systemd[1]: Started sshd@6-172.31.28.120:22-139.178.68.195:48722.service - OpenSSH per-connection server daemon (139.178.68.195:48722).
Sep 12 17:49:39.098477 sshd[2263]: Accepted publickey for core from 139.178.68.195 port 48722 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:49:39.099937 sshd-session[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:49:39.105835 systemd-logind[1859]: New session 7 of user core.
Sep 12 17:49:39.111298 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 17:49:39.210877 sudo[2267]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 17:49:39.211167 sudo[2267]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:49:39.595674 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 17:49:39.606548 (dockerd)[2284]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 17:49:39.927533 dockerd[2284]: time="2025-09-12T17:49:39.927271786Z" level=info msg="Starting up"
Sep 12 17:49:39.928563 dockerd[2284]: time="2025-09-12T17:49:39.928532607Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 12 17:49:39.940679 dockerd[2284]: time="2025-09-12T17:49:39.940636423Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 12 17:49:40.066010 systemd[1]: var-lib-docker-metacopy\x2dcheck262050949-merged.mount: Deactivated successfully.
Sep 12 17:49:40.090211 dockerd[2284]: time="2025-09-12T17:49:40.089993725Z" level=info msg="Loading containers: start."
Sep 12 17:49:40.103067 kernel: Initializing XFRM netlink socket
Sep 12 17:49:40.347947 (udev-worker)[2306]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:49:40.400469 systemd-networkd[1824]: docker0: Link UP
Sep 12 17:49:40.406383 dockerd[2284]: time="2025-09-12T17:49:40.406331113Z" level=info msg="Loading containers: done."
Sep 12 17:49:40.427018 dockerd[2284]: time="2025-09-12T17:49:40.426953564Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:49:40.427245 dockerd[2284]: time="2025-09-12T17:49:40.427079213Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 12 17:49:40.427245 dockerd[2284]: time="2025-09-12T17:49:40.427197272Z" level=info msg="Initializing buildkit"
Sep 12 17:49:40.460140 dockerd[2284]: time="2025-09-12T17:49:40.460101017Z" level=info msg="Completed buildkit initialization"
Sep 12 17:49:40.469290 dockerd[2284]: time="2025-09-12T17:49:40.469072548Z" level=info msg="Daemon has completed initialization"
Sep 12 17:49:40.469290 dockerd[2284]: time="2025-09-12T17:49:40.469144945Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:49:40.469559 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:49:41.568382 containerd[1903]: time="2025-09-12T17:49:41.568333830Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 12 17:49:42.211813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243817957.mount: Deactivated successfully.
Sep 12 17:49:43.707494 containerd[1903]: time="2025-09-12T17:49:43.707416258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:43.709086 containerd[1903]: time="2025-09-12T17:49:43.708952438Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Sep 12 17:49:43.711178 containerd[1903]: time="2025-09-12T17:49:43.711119420Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:43.716062 containerd[1903]: time="2025-09-12T17:49:43.714895084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:43.716062 containerd[1903]: time="2025-09-12T17:49:43.715950477Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.147575226s"
Sep 12 17:49:43.716062 containerd[1903]: time="2025-09-12T17:49:43.715993751Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 12 17:49:43.716902 containerd[1903]: time="2025-09-12T17:49:43.716865804Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 12 17:49:45.458721 containerd[1903]: time="2025-09-12T17:49:45.458651389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:45.460905 containerd[1903]: time="2025-09-12T17:49:45.460863346Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Sep 12 17:49:45.464068 containerd[1903]: time="2025-09-12T17:49:45.462955212Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:45.466921 containerd[1903]: time="2025-09-12T17:49:45.466879289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:45.467956 containerd[1903]: time="2025-09-12T17:49:45.467919181Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.751019233s"
Sep 12 17:49:45.468123 containerd[1903]: time="2025-09-12T17:49:45.468103167Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 12 17:49:45.469195 containerd[1903]: time="2025-09-12T17:49:45.469156900Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 12 17:49:46.825722 containerd[1903]: time="2025-09-12T17:49:46.825666539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:46.827805 containerd[1903]: time="2025-09-12T17:49:46.827746690Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Sep 12 17:49:46.830842 containerd[1903]: time="2025-09-12T17:49:46.830782870Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:46.834473 containerd[1903]: time="2025-09-12T17:49:46.834398532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:46.835629 containerd[1903]: time="2025-09-12T17:49:46.835490871Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.366294529s"
Sep 12 17:49:46.835629 containerd[1903]: time="2025-09-12T17:49:46.835525369Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 12 17:49:46.836175 containerd[1903]: time="2025-09-12T17:49:46.836143820Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 12 17:49:47.487892 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:49:47.490788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:49:47.760328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:49:47.771723 (kubelet)[2573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:49:47.858915 kubelet[2573]: E0912 17:49:47.858814 2573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:49:47.866476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:49:47.866845 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:49:47.867575 systemd[1]: kubelet.service: Consumed 215ms CPU time, 108.7M memory peak.
Sep 12 17:49:48.062810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057071622.mount: Deactivated successfully.
Sep 12 17:49:48.671560 containerd[1903]: time="2025-09-12T17:49:48.671485912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:48.673415 containerd[1903]: time="2025-09-12T17:49:48.673369646Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Sep 12 17:49:48.676127 containerd[1903]: time="2025-09-12T17:49:48.676059323Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:48.679654 containerd[1903]: time="2025-09-12T17:49:48.679058101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:48.679654 containerd[1903]: time="2025-09-12T17:49:48.679527225Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.843354783s"
Sep 12 17:49:48.679654 containerd[1903]: time="2025-09-12T17:49:48.679556113Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 12 17:49:48.679967 containerd[1903]: time="2025-09-12T17:49:48.679950673Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 12 17:49:49.200807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2252849566.mount: Deactivated successfully.
Sep 12 17:49:50.367352 containerd[1903]: time="2025-09-12T17:49:50.367273954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:50.369410 containerd[1903]: time="2025-09-12T17:49:50.369362775Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 12 17:49:50.372480 containerd[1903]: time="2025-09-12T17:49:50.372407760Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:50.377054 containerd[1903]: time="2025-09-12T17:49:50.376999042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:50.378183 containerd[1903]: time="2025-09-12T17:49:50.377869415Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.697797075s"
Sep 12 17:49:50.378183 containerd[1903]: time="2025-09-12T17:49:50.377907599Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 12 17:49:50.378752 containerd[1903]: time="2025-09-12T17:49:50.378680872Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:49:50.889216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539290411.mount: Deactivated successfully.
Sep 12 17:49:50.907866 containerd[1903]: time="2025-09-12T17:49:50.907776293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:49:50.909647 containerd[1903]: time="2025-09-12T17:49:50.909558777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 17:49:50.915602 containerd[1903]: time="2025-09-12T17:49:50.915511118Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:49:50.919462 containerd[1903]: time="2025-09-12T17:49:50.919412397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:49:50.920703 containerd[1903]: time="2025-09-12T17:49:50.920209197Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 541.481792ms"
Sep 12 17:49:50.920703 containerd[1903]: time="2025-09-12T17:49:50.920249306Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 17:49:50.920973 containerd[1903]: time="2025-09-12T17:49:50.920942411Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 12 17:49:51.470172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3270154991.mount: Deactivated successfully.
Sep 12 17:49:53.610055 containerd[1903]: time="2025-09-12T17:49:53.609966980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:53.612073 containerd[1903]: time="2025-09-12T17:49:53.611930998Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Sep 12 17:49:53.614856 containerd[1903]: time="2025-09-12T17:49:53.614788533Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:53.618662 containerd[1903]: time="2025-09-12T17:49:53.618606066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:53.619724 containerd[1903]: time="2025-09-12T17:49:53.619557400Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.698586145s"
Sep 12 17:49:53.619724 containerd[1903]: time="2025-09-12T17:49:53.619589270Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 12 17:49:56.342133 systemd-resolved[1770]: Clock change detected. Flushing caches.
Sep 12 17:49:58.628082 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:49:58.628339 systemd[1]: kubelet.service: Consumed 215ms CPU time, 108.7M memory peak.
Sep 12 17:49:58.631079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:49:58.663818 systemd[1]: Reload requested from client PID 2725 ('systemctl') (unit session-7.scope)...
Sep 12 17:49:58.663837 systemd[1]: Reloading...
Sep 12 17:49:58.775108 zram_generator::config[2773]: No configuration found.
Sep 12 17:49:59.048677 systemd[1]: Reloading finished in 384 ms.
Sep 12 17:49:59.117561 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:49:59.117672 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:49:59.118105 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:49:59.118169 systemd[1]: kubelet.service: Consumed 128ms CPU time, 98.1M memory peak.
Sep 12 17:49:59.120394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:49:59.407234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:49:59.416534 (kubelet)[2833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:49:59.463633 kubelet[2833]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:49:59.463633 kubelet[2833]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:49:59.463633 kubelet[2833]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:49:59.468478 kubelet[2833]: I0912 17:49:59.468400 2833 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:49:59.992402 kubelet[2833]: I0912 17:49:59.992356 2833 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 17:49:59.992402 kubelet[2833]: I0912 17:49:59.992386 2833 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:49:59.992680 kubelet[2833]: I0912 17:49:59.992658 2833 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 17:50:00.158863 kubelet[2833]: I0912 17:50:00.158709 2833 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:50:00.160581 kubelet[2833]: E0912 17:50:00.160533 2833 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.120:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 12 17:50:00.194299 kubelet[2833]: I0912 17:50:00.194264 2833 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 17:50:00.211221 kubelet[2833]: I0912 17:50:00.210832 2833 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:50:00.218694 kubelet[2833]: I0912 17:50:00.218630 2833 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:50:00.223251 kubelet[2833]: I0912 17:50:00.218845 2833 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-120","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:50:00.227329 kubelet[2833]: I0912 17:50:00.227279 2833 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:50:00.227329 kubelet[2833]: I0912 17:50:00.227329 2833 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 17:50:00.228454 kubelet[2833]: I0912 17:50:00.227476 2833 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:50:00.236092 kubelet[2833]: I0912 17:50:00.235939 2833 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 17:50:00.237467 kubelet[2833]: I0912 17:50:00.236236 2833 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:50:00.238277 kubelet[2833]: I0912 17:50:00.238243 2833 kubelet.go:386] "Adding apiserver pod source"
Sep 12 17:50:00.240724 kubelet[2833]: I0912 17:50:00.240339 2833 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:50:00.243362 kubelet[2833]: E0912 17:50:00.243253 2833 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-120&limit=500&resourceVersion=0\": dial tcp 172.31.28.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 12 17:50:00.253267 kubelet[2833]: I0912 17:50:00.253244 2833 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 17:50:00.253924 kubelet[2833]: I0912 17:50:00.253896 2833 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 17:50:00.256102 kubelet[2833]: W0912 17:50:00.256081 2833 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:50:00.256869 kubelet[2833]: E0912 17:50:00.256828 2833 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 12 17:50:00.261999 kubelet[2833]: I0912 17:50:00.261942 2833 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:50:00.263323 kubelet[2833]: I0912 17:50:00.262032 2833 server.go:1289] "Started kubelet"
Sep 12 17:50:00.267811 kubelet[2833]: I0912 17:50:00.267775 2833 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:50:00.278223 kubelet[2833]: E0912 17:50:00.269418 2833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.120:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.120:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-120.18649a4ab6885329 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-120,UID:ip-172-31-28-120,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-120,},FirstTimestamp:2025-09-12 17:50:00.261980969 +0000 UTC m=+0.841525857,LastTimestamp:2025-09-12 17:50:00.261980969 +0000 UTC m=+0.841525857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-120,}"
Sep 12 17:50:00.278223 kubelet[2833]: I0912 17:50:00.275803 2833 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:50:00.278223 kubelet[2833]: I0912 17:50:00.276595 2833 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 17:50:00.279315 kubelet[2833]: I0912 17:50:00.278921 2833 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:50:00.281798 kubelet[2833]: I0912 17:50:00.281738 2833 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:50:00.283142 kubelet[2833]: E0912 17:50:00.282194 2833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-120\" not found"
Sep 12 17:50:00.296877 kubelet[2833]: I0912 17:50:00.284265 2833 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:50:00.297370 kubelet[2833]: I0912 17:50:00.297347 2833 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:50:00.297531 kubelet[2833]: I0912 17:50:00.293637 2833 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:50:00.299233 kubelet[2833]: E0912 17:50:00.299014 2833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-120?timeout=10s\": dial tcp 172.31.28.120:6443: connect: connection refused" interval="200ms"
Sep 12 17:50:00.302142 kubelet[2833]: I0912 17:50:00.302114 2833 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:50:00.302778 kubelet[2833]: I0912 17:50:00.302196 2833 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:50:00.303176 kubelet[2833]: E0912 17:50:00.303147 2833 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 12 17:50:00.305003 kubelet[2833]: I0912 17:50:00.303954 2833 factory.go:223] Registration of the systemd container factory successfully
Sep 12 17:50:00.305003 kubelet[2833]: I0912 17:50:00.304130 2833 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:50:00.308193 kubelet[2833]: E0912 17:50:00.308045 2833 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:50:00.310654 kubelet[2833]: I0912 17:50:00.308209 2833 factory.go:223] Registration of the containerd container factory successfully
Sep 12 17:50:00.324808 kubelet[2833]: I0912 17:50:00.324094 2833 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:50:00.324808 kubelet[2833]: I0912 17:50:00.324129 2833 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 17:50:00.324808 kubelet[2833]: I0912 17:50:00.324154 2833 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:50:00.324808 kubelet[2833]: I0912 17:50:00.324163 2833 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:50:00.324808 kubelet[2833]: E0912 17:50:00.324212 2833 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:50:00.330227 kubelet[2833]: E0912 17:50:00.330198 2833 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:50:00.340990 kubelet[2833]: I0912 17:50:00.340921 2833 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:50:00.340990 kubelet[2833]: I0912 17:50:00.340939 2833 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:50:00.340990 kubelet[2833]: I0912 17:50:00.340966 2833 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:50:00.347469 kubelet[2833]: I0912 17:50:00.346093 2833 policy_none.go:49] "None policy: Start" Sep 12 17:50:00.347469 kubelet[2833]: I0912 17:50:00.346125 2833 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:50:00.347469 kubelet[2833]: I0912 17:50:00.346138 2833 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:50:00.356457 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:50:00.370827 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:50:00.374265 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 12 17:50:00.385370 kubelet[2833]: E0912 17:50:00.385119 2833 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:50:00.385370 kubelet[2833]: I0912 17:50:00.385321 2833 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:50:00.385370 kubelet[2833]: I0912 17:50:00.385333 2833 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:50:00.385926 kubelet[2833]: I0912 17:50:00.385905 2833 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:50:00.388239 kubelet[2833]: E0912 17:50:00.388212 2833 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:50:00.388320 kubelet[2833]: E0912 17:50:00.388252 2833 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-120\" not found" Sep 12 17:50:00.438550 systemd[1]: Created slice kubepods-burstable-pod23c12640e1984f38145d84ef4311f9f3.slice - libcontainer container kubepods-burstable-pod23c12640e1984f38145d84ef4311f9f3.slice. Sep 12 17:50:00.447244 kubelet[2833]: E0912 17:50:00.447185 2833 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-120\" not found" node="ip-172-31-28-120" Sep 12 17:50:00.450824 systemd[1]: Created slice kubepods-burstable-pod34e7086ed7dc4c7854daa088bdaee715.slice - libcontainer container kubepods-burstable-pod34e7086ed7dc4c7854daa088bdaee715.slice. 
Sep 12 17:50:00.462674 kubelet[2833]: E0912 17:50:00.462610 2833 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-120\" not found" node="ip-172-31-28-120" Sep 12 17:50:00.467758 systemd[1]: Created slice kubepods-burstable-podabb7aee6518f36f3d0eec03de73b23f1.slice - libcontainer container kubepods-burstable-podabb7aee6518f36f3d0eec03de73b23f1.slice. Sep 12 17:50:00.470184 kubelet[2833]: E0912 17:50:00.470155 2833 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-120\" not found" node="ip-172-31-28-120" Sep 12 17:50:00.488196 kubelet[2833]: I0912 17:50:00.488171 2833 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-120" Sep 12 17:50:00.488806 kubelet[2833]: E0912 17:50:00.488773 2833 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.120:6443/api/v1/nodes\": dial tcp 172.31.28.120:6443: connect: connection refused" node="ip-172-31-28-120" Sep 12 17:50:00.500653 kubelet[2833]: E0912 17:50:00.500523 2833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-120?timeout=10s\": dial tcp 172.31.28.120:6443: connect: connection refused" interval="400ms" Sep 12 17:50:00.603505 kubelet[2833]: I0912 17:50:00.603297 2833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34e7086ed7dc4c7854daa088bdaee715-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-120\" (UID: \"34e7086ed7dc4c7854daa088bdaee715\") " pod="kube-system/kube-controller-manager-ip-172-31-28-120" Sep 12 17:50:00.603505 kubelet[2833]: I0912 17:50:00.603340 2833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/34e7086ed7dc4c7854daa088bdaee715-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-120\" (UID: \"34e7086ed7dc4c7854daa088bdaee715\") " pod="kube-system/kube-controller-manager-ip-172-31-28-120" Sep 12 17:50:00.603505 kubelet[2833]: I0912 17:50:00.603359 2833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34e7086ed7dc4c7854daa088bdaee715-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-120\" (UID: \"34e7086ed7dc4c7854daa088bdaee715\") " pod="kube-system/kube-controller-manager-ip-172-31-28-120" Sep 12 17:50:00.603505 kubelet[2833]: I0912 17:50:00.603381 2833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abb7aee6518f36f3d0eec03de73b23f1-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-120\" (UID: \"abb7aee6518f36f3d0eec03de73b23f1\") " pod="kube-system/kube-scheduler-ip-172-31-28-120" Sep 12 17:50:00.603505 kubelet[2833]: I0912 17:50:00.603397 2833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23c12640e1984f38145d84ef4311f9f3-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-120\" (UID: \"23c12640e1984f38145d84ef4311f9f3\") " pod="kube-system/kube-apiserver-ip-172-31-28-120" Sep 12 17:50:00.603746 kubelet[2833]: I0912 17:50:00.603412 2833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23c12640e1984f38145d84ef4311f9f3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-120\" (UID: \"23c12640e1984f38145d84ef4311f9f3\") " pod="kube-system/kube-apiserver-ip-172-31-28-120" Sep 12 17:50:00.603746 kubelet[2833]: I0912 17:50:00.603427 2833 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/34e7086ed7dc4c7854daa088bdaee715-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-120\" (UID: \"34e7086ed7dc4c7854daa088bdaee715\") " pod="kube-system/kube-controller-manager-ip-172-31-28-120" Sep 12 17:50:00.603746 kubelet[2833]: I0912 17:50:00.603441 2833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34e7086ed7dc4c7854daa088bdaee715-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-120\" (UID: \"34e7086ed7dc4c7854daa088bdaee715\") " pod="kube-system/kube-controller-manager-ip-172-31-28-120" Sep 12 17:50:00.603746 kubelet[2833]: I0912 17:50:00.603456 2833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23c12640e1984f38145d84ef4311f9f3-ca-certs\") pod \"kube-apiserver-ip-172-31-28-120\" (UID: \"23c12640e1984f38145d84ef4311f9f3\") " pod="kube-system/kube-apiserver-ip-172-31-28-120" Sep 12 17:50:00.693295 kubelet[2833]: I0912 17:50:00.693258 2833 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-120" Sep 12 17:50:00.693644 kubelet[2833]: E0912 17:50:00.693614 2833 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.120:6443/api/v1/nodes\": dial tcp 172.31.28.120:6443: connect: connection refused" node="ip-172-31-28-120" Sep 12 17:50:00.749178 containerd[1903]: time="2025-09-12T17:50:00.749126746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-120,Uid:23c12640e1984f38145d84ef4311f9f3,Namespace:kube-system,Attempt:0,}" Sep 12 17:50:00.765413 containerd[1903]: time="2025-09-12T17:50:00.765195012Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-120,Uid:34e7086ed7dc4c7854daa088bdaee715,Namespace:kube-system,Attempt:0,}" Sep 12 17:50:00.775089 containerd[1903]: time="2025-09-12T17:50:00.775029917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-120,Uid:abb7aee6518f36f3d0eec03de73b23f1,Namespace:kube-system,Attempt:0,}" Sep 12 17:50:00.901382 kubelet[2833]: E0912 17:50:00.901300 2833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-120?timeout=10s\": dial tcp 172.31.28.120:6443: connect: connection refused" interval="800ms" Sep 12 17:50:00.924735 containerd[1903]: time="2025-09-12T17:50:00.924682245Z" level=info msg="connecting to shim 09457f692184c2c3bfb184549c5fcaf14a34dcda88ff79c1f0714eeb2ffe4477" address="unix:///run/containerd/s/bfd02d89c92ff2b34e58e8dfbc9a28af32e40f39952a2dc6137c18ef6c5728fc" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:50:00.928587 containerd[1903]: time="2025-09-12T17:50:00.928544962Z" level=info msg="connecting to shim 39dd71cba3486288bf2058b78cf278b5d038d2c771ee471482f9355e0bd32009" address="unix:///run/containerd/s/4eccce1c84bc67c5d166b8214196eb387b9f082d156f2a86cebce798d7de96d8" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:50:00.936258 containerd[1903]: time="2025-09-12T17:50:00.936200978Z" level=info msg="connecting to shim 97a64e7cacd301fa4aed1b5db9d8bd9ebd1a8442815ab100a27f971c7e679369" address="unix:///run/containerd/s/8dcbbbacedb7060107f6f7b9618d8e377a0cf7822e513e3de0a544206202a135" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:50:01.075972 systemd[1]: Started cri-containerd-09457f692184c2c3bfb184549c5fcaf14a34dcda88ff79c1f0714eeb2ffe4477.scope - libcontainer container 09457f692184c2c3bfb184549c5fcaf14a34dcda88ff79c1f0714eeb2ffe4477. 
Sep 12 17:50:01.089551 systemd[1]: Started cri-containerd-39dd71cba3486288bf2058b78cf278b5d038d2c771ee471482f9355e0bd32009.scope - libcontainer container 39dd71cba3486288bf2058b78cf278b5d038d2c771ee471482f9355e0bd32009. Sep 12 17:50:01.096808 kubelet[2833]: I0912 17:50:01.096450 2833 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-120" Sep 12 17:50:01.096808 kubelet[2833]: E0912 17:50:01.096775 2833 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.120:6443/api/v1/nodes\": dial tcp 172.31.28.120:6443: connect: connection refused" node="ip-172-31-28-120" Sep 12 17:50:01.098744 systemd[1]: Started cri-containerd-97a64e7cacd301fa4aed1b5db9d8bd9ebd1a8442815ab100a27f971c7e679369.scope - libcontainer container 97a64e7cacd301fa4aed1b5db9d8bd9ebd1a8442815ab100a27f971c7e679369. Sep 12 17:50:01.366766 kubelet[2833]: E0912 17:50:01.366641 2833 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:50:01.367184 containerd[1903]: time="2025-09-12T17:50:01.366928659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-120,Uid:34e7086ed7dc4c7854daa088bdaee715,Namespace:kube-system,Attempt:0,} returns sandbox id \"39dd71cba3486288bf2058b78cf278b5d038d2c771ee471482f9355e0bd32009\"" Sep 12 17:50:01.409238 containerd[1903]: time="2025-09-12T17:50:01.409136761Z" level=info msg="CreateContainer within sandbox \"39dd71cba3486288bf2058b78cf278b5d038d2c771ee471482f9355e0bd32009\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:50:01.440084 containerd[1903]: time="2025-09-12T17:50:01.439987252Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-120,Uid:23c12640e1984f38145d84ef4311f9f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"97a64e7cacd301fa4aed1b5db9d8bd9ebd1a8442815ab100a27f971c7e679369\"" Sep 12 17:50:01.481804 kubelet[2833]: E0912 17:50:01.481701 2833 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-120&limit=500&resourceVersion=0\": dial tcp 172.31.28.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:50:01.482403 containerd[1903]: time="2025-09-12T17:50:01.481723201Z" level=info msg="CreateContainer within sandbox \"97a64e7cacd301fa4aed1b5db9d8bd9ebd1a8442815ab100a27f971c7e679369\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:50:01.505207 containerd[1903]: time="2025-09-12T17:50:01.505153896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-120,Uid:abb7aee6518f36f3d0eec03de73b23f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"09457f692184c2c3bfb184549c5fcaf14a34dcda88ff79c1f0714eeb2ffe4477\"" Sep 12 17:50:01.506664 containerd[1903]: time="2025-09-12T17:50:01.506618747Z" level=info msg="Container 0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:01.530936 containerd[1903]: time="2025-09-12T17:50:01.530846636Z" level=info msg="CreateContainer within sandbox \"09457f692184c2c3bfb184549c5fcaf14a34dcda88ff79c1f0714eeb2ffe4477\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:50:01.534471 containerd[1903]: time="2025-09-12T17:50:01.534302595Z" level=info msg="Container 46de48209fcca6ea5b233540f8ce9bef4e228204f4c5677f6b9581cabff2561f: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:01.570180 containerd[1903]: time="2025-09-12T17:50:01.570122975Z" level=info 
msg="Container 36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:01.582894 containerd[1903]: time="2025-09-12T17:50:01.582733171Z" level=info msg="CreateContainer within sandbox \"97a64e7cacd301fa4aed1b5db9d8bd9ebd1a8442815ab100a27f971c7e679369\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"46de48209fcca6ea5b233540f8ce9bef4e228204f4c5677f6b9581cabff2561f\"" Sep 12 17:50:01.589145 containerd[1903]: time="2025-09-12T17:50:01.589093901Z" level=info msg="StartContainer for \"46de48209fcca6ea5b233540f8ce9bef4e228204f4c5677f6b9581cabff2561f\"" Sep 12 17:50:01.590638 containerd[1903]: time="2025-09-12T17:50:01.590573500Z" level=info msg="connecting to shim 46de48209fcca6ea5b233540f8ce9bef4e228204f4c5677f6b9581cabff2561f" address="unix:///run/containerd/s/8dcbbbacedb7060107f6f7b9618d8e377a0cf7822e513e3de0a544206202a135" protocol=ttrpc version=3 Sep 12 17:50:01.605093 containerd[1903]: time="2025-09-12T17:50:01.605020641Z" level=info msg="CreateContainer within sandbox \"39dd71cba3486288bf2058b78cf278b5d038d2c771ee471482f9355e0bd32009\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f\"" Sep 12 17:50:01.608118 containerd[1903]: time="2025-09-12T17:50:01.606759748Z" level=info msg="CreateContainer within sandbox \"09457f692184c2c3bfb184549c5fcaf14a34dcda88ff79c1f0714eeb2ffe4477\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2\"" Sep 12 17:50:01.609294 containerd[1903]: time="2025-09-12T17:50:01.609256578Z" level=info msg="StartContainer for \"36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2\"" Sep 12 17:50:01.610230 containerd[1903]: time="2025-09-12T17:50:01.610196132Z" level=info msg="StartContainer for 
\"0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f\"" Sep 12 17:50:01.613446 containerd[1903]: time="2025-09-12T17:50:01.613404723Z" level=info msg="connecting to shim 0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f" address="unix:///run/containerd/s/4eccce1c84bc67c5d166b8214196eb387b9f082d156f2a86cebce798d7de96d8" protocol=ttrpc version=3 Sep 12 17:50:01.618326 containerd[1903]: time="2025-09-12T17:50:01.618204440Z" level=info msg="connecting to shim 36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2" address="unix:///run/containerd/s/bfd02d89c92ff2b34e58e8dfbc9a28af32e40f39952a2dc6137c18ef6c5728fc" protocol=ttrpc version=3 Sep 12 17:50:01.679463 systemd[1]: Started cri-containerd-46de48209fcca6ea5b233540f8ce9bef4e228204f4c5677f6b9581cabff2561f.scope - libcontainer container 46de48209fcca6ea5b233540f8ce9bef4e228204f4c5677f6b9581cabff2561f. Sep 12 17:50:01.706076 kubelet[2833]: E0912 17:50:01.706011 2833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-120?timeout=10s\": dial tcp 172.31.28.120:6443: connect: connection refused" interval="1.6s" Sep 12 17:50:01.768374 kubelet[2833]: E0912 17:50:01.768318 2833 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:50:01.769431 systemd[1]: Started cri-containerd-0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f.scope - libcontainer container 0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f. 
Sep 12 17:50:01.861718 kubelet[2833]: E0912 17:50:01.855296 2833 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:50:01.906083 systemd[1]: Started cri-containerd-36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2.scope - libcontainer container 36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2. Sep 12 17:50:01.918133 kubelet[2833]: I0912 17:50:01.918098 2833 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-120" Sep 12 17:50:01.923095 kubelet[2833]: E0912 17:50:01.922798 2833 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.120:6443/api/v1/nodes\": dial tcp 172.31.28.120:6443: connect: connection refused" node="ip-172-31-28-120" Sep 12 17:50:02.200875 containerd[1903]: time="2025-09-12T17:50:02.200727263Z" level=info msg="StartContainer for \"46de48209fcca6ea5b233540f8ce9bef4e228204f4c5677f6b9581cabff2561f\" returns successfully" Sep 12 17:50:02.259654 containerd[1903]: time="2025-09-12T17:50:02.259511300Z" level=info msg="StartContainer for \"0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f\" returns successfully" Sep 12 17:50:02.262021 containerd[1903]: time="2025-09-12T17:50:02.261925071Z" level=info msg="StartContainer for \"36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2\" returns successfully" Sep 12 17:50:02.325423 kubelet[2833]: E0912 17:50:02.325380 2833 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.120:6443: 
connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:50:02.368949 kubelet[2833]: E0912 17:50:02.368909 2833 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-120\" not found" node="ip-172-31-28-120" Sep 12 17:50:02.408051 kubelet[2833]: E0912 17:50:02.404625 2833 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-120\" not found" node="ip-172-31-28-120" Sep 12 17:50:02.408051 kubelet[2833]: E0912 17:50:02.404841 2833 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-120\" not found" node="ip-172-31-28-120" Sep 12 17:50:03.307826 kubelet[2833]: E0912 17:50:03.307725 2833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-120?timeout=10s\": dial tcp 172.31.28.120:6443: connect: connection refused" interval="3.2s" Sep 12 17:50:03.400027 kubelet[2833]: E0912 17:50:03.399970 2833 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-120\" not found" node="ip-172-31-28-120" Sep 12 17:50:03.401685 kubelet[2833]: E0912 17:50:03.400857 2833 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-120\" not found" node="ip-172-31-28-120" Sep 12 17:50:03.528521 kubelet[2833]: I0912 17:50:03.528462 2833 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-120" Sep 12 17:50:03.528925 kubelet[2833]: E0912 17:50:03.528895 2833 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.120:6443/api/v1/nodes\": dial tcp 172.31.28.120:6443: connect: connection refused" node="ip-172-31-28-120" Sep 
12 17:50:03.543119 kubelet[2833]: E0912 17:50:03.542818 2833 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-120&limit=500&resourceVersion=0\": dial tcp 172.31.28.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:50:03.619977 kubelet[2833]: E0912 17:50:03.619865 2833 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-120\" not found" node="ip-172-31-28-120" Sep 12 17:50:05.777894 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 12 17:50:06.340099 kubelet[2833]: E0912 17:50:06.340050 2833 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-28-120" not found Sep 12 17:50:06.523963 kubelet[2833]: E0912 17:50:06.523922 2833 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-120\" not found" node="ip-172-31-28-120" Sep 12 17:50:06.712408 kubelet[2833]: E0912 17:50:06.712360 2833 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-28-120" not found Sep 12 17:50:06.732946 kubelet[2833]: I0912 17:50:06.732914 2833 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-120" Sep 12 17:50:06.758133 kubelet[2833]: I0912 17:50:06.758048 2833 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-120" Sep 12 17:50:06.758133 kubelet[2833]: E0912 17:50:06.758114 2833 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-120\": node \"ip-172-31-28-120\" not found" Sep 12 17:50:06.801754 kubelet[2833]: E0912 17:50:06.801700 2833 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-120\" not found"
Sep 12 17:50:06.902896 kubelet[2833]: E0912 17:50:06.902847 2833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-120\" not found"
Sep 12 17:50:07.004115 kubelet[2833]: E0912 17:50:07.003202 2833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-120\" not found"
Sep 12 17:50:07.105442 kubelet[2833]: E0912 17:50:07.105311 2833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-120\" not found"
Sep 12 17:50:07.206157 kubelet[2833]: E0912 17:50:07.206079 2833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-120\" not found"
Sep 12 17:50:07.307382 kubelet[2833]: E0912 17:50:07.307016 2833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-120\" not found"
Sep 12 17:50:07.407584 kubelet[2833]: E0912 17:50:07.407539 2833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-120\" not found"
Sep 12 17:50:07.509977 kubelet[2833]: E0912 17:50:07.509931 2833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-120\" not found"
Sep 12 17:50:07.610473 kubelet[2833]: E0912 17:50:07.610154 2833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-120\" not found"
Sep 12 17:50:07.712827 kubelet[2833]: E0912 17:50:07.712653 2833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-120\" not found"
Sep 12 17:50:07.817467 kubelet[2833]: E0912 17:50:07.817428 2833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-120\" not found"
Sep 12 17:50:07.894372 kubelet[2833]: I0912 17:50:07.894311 2833 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-120"
Sep 12 17:50:07.921037 kubelet[2833]: I0912 17:50:07.920997 2833 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-120"
Sep 12 17:50:07.931394 kubelet[2833]: I0912 17:50:07.931357 2833 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-120"
Sep 12 17:50:08.253665 kubelet[2833]: I0912 17:50:08.253531 2833 apiserver.go:52] "Watching apiserver"
Sep 12 17:50:08.302686 kubelet[2833]: I0912 17:50:08.302647 2833 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:50:08.896595 systemd[1]: Reload requested from client PID 3115 ('systemctl') (unit session-7.scope)...
Sep 12 17:50:08.896612 systemd[1]: Reloading...
Sep 12 17:50:09.061091 zram_generator::config[3168]: No configuration found.
Sep 12 17:50:09.353117 systemd[1]: Reloading finished in 456 ms.
Sep 12 17:50:09.390682 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:50:09.404439 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:50:09.404747 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:50:09.404842 systemd[1]: kubelet.service: Consumed 1.163s CPU time, 127.7M memory peak.
Sep 12 17:50:09.407656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:50:09.700217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:50:09.715350 (kubelet)[3219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:50:09.801850 kubelet[3219]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:50:09.801850 kubelet[3219]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:50:09.801850 kubelet[3219]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:50:09.804147 kubelet[3219]: I0912 17:50:09.804094 3219 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:50:09.812017 kubelet[3219]: I0912 17:50:09.811967 3219 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 17:50:09.814196 kubelet[3219]: I0912 17:50:09.812162 3219 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:50:09.814196 kubelet[3219]: I0912 17:50:09.812404 3219 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 17:50:09.814196 kubelet[3219]: I0912 17:50:09.813762 3219 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 12 17:50:09.825666 kubelet[3219]: I0912 17:50:09.825628 3219 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:50:09.832602 kubelet[3219]: I0912 17:50:09.832566 3219 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 17:50:09.836861 kubelet[3219]: I0912 17:50:09.836828 3219 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:50:09.837182 kubelet[3219]: I0912 17:50:09.837143 3219 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:50:09.837381 kubelet[3219]: I0912 17:50:09.837181 3219 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-120","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:50:09.837555 kubelet[3219]: I0912 17:50:09.837391 3219 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:50:09.837555 kubelet[3219]: I0912 17:50:09.837405 3219 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 17:50:09.837555 kubelet[3219]: I0912 17:50:09.837465 3219 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:50:09.837779 kubelet[3219]: I0912 17:50:09.837658 3219 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 17:50:09.837779 kubelet[3219]: I0912 17:50:09.837675 3219 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:50:09.838402 kubelet[3219]: I0912 17:50:09.838369 3219 kubelet.go:386] "Adding apiserver pod source"
Sep 12 17:50:09.840176 kubelet[3219]: I0912 17:50:09.838425 3219 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:50:09.840694 kubelet[3219]: I0912 17:50:09.840676 3219 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 17:50:09.841575 kubelet[3219]: I0912 17:50:09.841557 3219 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 17:50:09.853275 kubelet[3219]: I0912 17:50:09.853252 3219 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:50:09.853475 kubelet[3219]: I0912 17:50:09.853465 3219 server.go:1289] "Started kubelet"
Sep 12 17:50:09.855124 kubelet[3219]: I0912 17:50:09.854386 3219 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:50:09.855571 kubelet[3219]: I0912 17:50:09.855553 3219 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:50:09.855754 kubelet[3219]: I0912 17:50:09.855732 3219 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:50:09.864541 kubelet[3219]: I0912 17:50:09.864513 3219 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 17:50:09.866875 kubelet[3219]: I0912 17:50:09.865465 3219 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:50:09.879463 kubelet[3219]: I0912 17:50:09.865607 3219 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:50:09.879586 kubelet[3219]: I0912 17:50:09.879548 3219 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:50:09.884252 kubelet[3219]: I0912 17:50:09.884217 3219 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:50:09.884388 kubelet[3219]: I0912 17:50:09.884375 3219 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:50:09.885740 kubelet[3219]: I0912 17:50:09.885698 3219 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:50:09.888300 kubelet[3219]: I0912 17:50:09.888273 3219 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:50:09.888464 kubelet[3219]: I0912 17:50:09.888454 3219 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 17:50:09.888549 kubelet[3219]: I0912 17:50:09.888539 3219 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:50:09.888606 kubelet[3219]: I0912 17:50:09.888599 3219 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 17:50:09.888712 kubelet[3219]: E0912 17:50:09.888696 3219 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:50:09.888790 kubelet[3219]: I0912 17:50:09.888773 3219 factory.go:223] Registration of the systemd container factory successfully
Sep 12 17:50:09.888906 kubelet[3219]: I0912 17:50:09.888885 3219 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:50:09.898374 kubelet[3219]: E0912 17:50:09.898268 3219 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:50:09.898789 kubelet[3219]: I0912 17:50:09.898765 3219 factory.go:223] Registration of the containerd container factory successfully
Sep 12 17:50:09.944745 sudo[3252]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 12 17:50:09.945769 sudo[3252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 12 17:50:09.982879 kubelet[3219]: I0912 17:50:09.982777 3219 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:50:09.982879 kubelet[3219]: I0912 17:50:09.982825 3219 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:50:09.982879 kubelet[3219]: I0912 17:50:09.982849 3219 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:50:09.983714 kubelet[3219]: I0912 17:50:09.983247 3219 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 17:50:09.983714 kubelet[3219]: I0912 17:50:09.983263 3219 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 17:50:09.983714 kubelet[3219]: I0912 17:50:09.983286 3219 policy_none.go:49] "None policy: Start"
Sep 12 17:50:09.983714 kubelet[3219]: I0912 17:50:09.983298 3219 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:50:09.983714 kubelet[3219]: I0912 17:50:09.983312 3219 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:50:09.983714 kubelet[3219]: I0912 17:50:09.983510 3219 state_mem.go:75] "Updated machine memory state"
Sep 12 17:50:09.990158 kubelet[3219]: E0912 17:50:09.990127 3219 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 17:50:09.992085 kubelet[3219]: E0912 17:50:09.991899 3219 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 17:50:09.993848 kubelet[3219]: I0912 17:50:09.993753 3219 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:50:09.993848 kubelet[3219]: I0912 17:50:09.993771 3219 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:50:09.998000 kubelet[3219]: I0912 17:50:09.997778 3219 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:50:10.002825 kubelet[3219]: E0912 17:50:10.002793 3219 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:50:10.111131 kubelet[3219]: I0912 17:50:10.110552 3219 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-120"
Sep 12 17:50:10.131741 kubelet[3219]: I0912 17:50:10.131209 3219 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-120"
Sep 12 17:50:10.131741 kubelet[3219]: I0912 17:50:10.131303 3219 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-120"
Sep 12 17:50:10.192523 kubelet[3219]: I0912 17:50:10.192492 3219 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-120"
Sep 12 17:50:10.193279 kubelet[3219]: I0912 17:50:10.193247 3219 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-120"
Sep 12 17:50:10.195898 kubelet[3219]: I0912 17:50:10.195647 3219 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-120"
Sep 12 17:50:10.210158 kubelet[3219]: E0912 17:50:10.209851 3219 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-120\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-28-120"
Sep 12 17:50:10.212256 kubelet[3219]: E0912 17:50:10.212121 3219 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-120\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-120"
Sep 12 17:50:10.212540 kubelet[3219]: E0912 17:50:10.212520 3219 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-120\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-120"
Sep 12 17:50:10.390335 kubelet[3219]: I0912 17:50:10.390261 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34e7086ed7dc4c7854daa088bdaee715-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-120\" (UID: \"34e7086ed7dc4c7854daa088bdaee715\") " pod="kube-system/kube-controller-manager-ip-172-31-28-120"
Sep 12 17:50:10.390507 kubelet[3219]: I0912 17:50:10.390365 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23c12640e1984f38145d84ef4311f9f3-ca-certs\") pod \"kube-apiserver-ip-172-31-28-120\" (UID: \"23c12640e1984f38145d84ef4311f9f3\") " pod="kube-system/kube-apiserver-ip-172-31-28-120"
Sep 12 17:50:10.390560 kubelet[3219]: I0912 17:50:10.390389 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23c12640e1984f38145d84ef4311f9f3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-120\" (UID: \"23c12640e1984f38145d84ef4311f9f3\") " pod="kube-system/kube-apiserver-ip-172-31-28-120"
Sep 12 17:50:10.390610 kubelet[3219]: I0912 17:50:10.390558 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34e7086ed7dc4c7854daa088bdaee715-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-120\" (UID: \"34e7086ed7dc4c7854daa088bdaee715\") " pod="kube-system/kube-controller-manager-ip-172-31-28-120"
Sep 12 17:50:10.390657 kubelet[3219]: I0912 17:50:10.390620 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abb7aee6518f36f3d0eec03de73b23f1-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-120\" (UID: \"abb7aee6518f36f3d0eec03de73b23f1\") " pod="kube-system/kube-scheduler-ip-172-31-28-120"
Sep 12 17:50:10.390703 kubelet[3219]: I0912 17:50:10.390676 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23c12640e1984f38145d84ef4311f9f3-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-120\" (UID: \"23c12640e1984f38145d84ef4311f9f3\") " pod="kube-system/kube-apiserver-ip-172-31-28-120"
Sep 12 17:50:10.390751 kubelet[3219]: I0912 17:50:10.390703 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/34e7086ed7dc4c7854daa088bdaee715-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-120\" (UID: \"34e7086ed7dc4c7854daa088bdaee715\") " pod="kube-system/kube-controller-manager-ip-172-31-28-120"
Sep 12 17:50:10.390797 kubelet[3219]: I0912 17:50:10.390759 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34e7086ed7dc4c7854daa088bdaee715-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-120\" (UID: \"34e7086ed7dc4c7854daa088bdaee715\") " pod="kube-system/kube-controller-manager-ip-172-31-28-120"
Sep 12 17:50:10.390843 kubelet[3219]: I0912 17:50:10.390790 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34e7086ed7dc4c7854daa088bdaee715-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-120\" (UID: \"34e7086ed7dc4c7854daa088bdaee715\") " pod="kube-system/kube-controller-manager-ip-172-31-28-120"
Sep 12 17:50:10.446710 sudo[3252]: pam_unix(sudo:session): session closed for user root
Sep 12 17:50:10.840039 kubelet[3219]: I0912 17:50:10.839647 3219 apiserver.go:52] "Watching apiserver"
Sep 12 17:50:10.884538 kubelet[3219]: I0912 17:50:10.884463 3219 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:50:10.951520 kubelet[3219]: I0912 17:50:10.951347 3219 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-120"
Sep 12 17:50:10.964700 kubelet[3219]: E0912 17:50:10.964580 3219 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-120\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-120"
Sep 12 17:50:11.007533 kubelet[3219]: I0912 17:50:11.007405 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-120" podStartSLOduration=4.007387025 podStartE2EDuration="4.007387025s" podCreationTimestamp="2025-09-12 17:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:50:11.006620511 +0000 UTC m=+1.283152148" watchObservedRunningTime="2025-09-12 17:50:11.007387025 +0000 UTC m=+1.283918656"
Sep 12 17:50:11.038033 kubelet[3219]: I0912 17:50:11.037873 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-120" podStartSLOduration=4.037836263 podStartE2EDuration="4.037836263s" podCreationTimestamp="2025-09-12 17:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:50:11.025864724 +0000 UTC m=+1.302396362" watchObservedRunningTime="2025-09-12 17:50:11.037836263 +0000 UTC m=+1.314367900"
Sep 12 17:50:11.055454 kubelet[3219]: I0912 17:50:11.055350 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-120" podStartSLOduration=4.054950437 podStartE2EDuration="4.054950437s" podCreationTimestamp="2025-09-12 17:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:50:11.03891864 +0000 UTC m=+1.315450277" watchObservedRunningTime="2025-09-12 17:50:11.054950437 +0000 UTC m=+1.331482074"
Sep 12 17:50:12.248491 sudo[2267]: pam_unix(sudo:session): session closed for user root
Sep 12 17:50:12.270705 sshd[2266]: Connection closed by 139.178.68.195 port 48722
Sep 12 17:50:12.271834 sshd-session[2263]: pam_unix(sshd:session): session closed for user core
Sep 12 17:50:12.276553 systemd[1]: sshd@6-172.31.28.120:22-139.178.68.195:48722.service: Deactivated successfully.
Sep 12 17:50:12.279816 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:50:12.280125 systemd[1]: session-7.scope: Consumed 5.278s CPU time, 211.2M memory peak.
Sep 12 17:50:12.283131 systemd-logind[1859]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:50:12.284437 systemd-logind[1859]: Removed session 7.
Sep 12 17:50:13.587550 kubelet[3219]: I0912 17:50:13.587516 3219 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 17:50:13.588347 kubelet[3219]: I0912 17:50:13.588217 3219 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 17:50:13.588408 containerd[1903]: time="2025-09-12T17:50:13.588019458Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 17:50:14.566581 systemd[1]: Created slice kubepods-besteffort-pod73e5fa01_cd66_4ff8_b98e_e2c0a5bda045.slice - libcontainer container kubepods-besteffort-pod73e5fa01_cd66_4ff8_b98e_e2c0a5bda045.slice.
Sep 12 17:50:14.590252 systemd[1]: Created slice kubepods-burstable-podf97f5d22_d7cf_4fd8_8cd8_86bd18c6d460.slice - libcontainer container kubepods-burstable-podf97f5d22_d7cf_4fd8_8cd8_86bd18c6d460.slice.
Sep 12 17:50:14.617253 kubelet[3219]: I0912 17:50:14.617209 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cilium-run\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.617253 kubelet[3219]: I0912 17:50:14.617259 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-bpf-maps\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.617756 kubelet[3219]: I0912 17:50:14.617279 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-hostproc\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.617756 kubelet[3219]: I0912 17:50:14.617310 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-xtables-lock\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.617756 kubelet[3219]: I0912 17:50:14.617337 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-clustermesh-secrets\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.617756 kubelet[3219]: I0912 17:50:14.617359 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cilium-config-path\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.617756 kubelet[3219]: I0912 17:50:14.617381 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-host-proc-sys-net\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.617756 kubelet[3219]: I0912 17:50:14.617410 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73e5fa01-cd66-4ff8-b98e-e2c0a5bda045-xtables-lock\") pod \"kube-proxy-lzkc4\" (UID: \"73e5fa01-cd66-4ff8-b98e-e2c0a5bda045\") " pod="kube-system/kube-proxy-lzkc4"
Sep 12 17:50:14.618005 kubelet[3219]: I0912 17:50:14.617438 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73e5fa01-cd66-4ff8-b98e-e2c0a5bda045-lib-modules\") pod \"kube-proxy-lzkc4\" (UID: \"73e5fa01-cd66-4ff8-b98e-e2c0a5bda045\") " pod="kube-system/kube-proxy-lzkc4"
Sep 12 17:50:14.618005 kubelet[3219]: I0912 17:50:14.617459 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cni-path\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.618005 kubelet[3219]: I0912 17:50:14.617479 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-host-proc-sys-kernel\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.618005 kubelet[3219]: I0912 17:50:14.617517 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59s2\" (UniqueName: \"kubernetes.io/projected/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-kube-api-access-k59s2\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.618005 kubelet[3219]: I0912 17:50:14.617545 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73e5fa01-cd66-4ff8-b98e-e2c0a5bda045-kube-proxy\") pod \"kube-proxy-lzkc4\" (UID: \"73e5fa01-cd66-4ff8-b98e-e2c0a5bda045\") " pod="kube-system/kube-proxy-lzkc4"
Sep 12 17:50:14.618335 kubelet[3219]: I0912 17:50:14.617566 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ffs8\" (UniqueName: \"kubernetes.io/projected/73e5fa01-cd66-4ff8-b98e-e2c0a5bda045-kube-api-access-5ffs8\") pod \"kube-proxy-lzkc4\" (UID: \"73e5fa01-cd66-4ff8-b98e-e2c0a5bda045\") " pod="kube-system/kube-proxy-lzkc4"
Sep 12 17:50:14.618335 kubelet[3219]: I0912 17:50:14.617586 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cilium-cgroup\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.618335 kubelet[3219]: I0912 17:50:14.617608 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-etc-cni-netd\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.618335 kubelet[3219]: I0912 17:50:14.617627 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-lib-modules\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.618335 kubelet[3219]: I0912 17:50:14.617649 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-hubble-tls\") pod \"cilium-nnmb5\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " pod="kube-system/cilium-nnmb5"
Sep 12 17:50:14.694687 systemd[1]: Created slice kubepods-besteffort-poded943fbf_2c62_41dd_a9df_ad0b568c95fd.slice - libcontainer container kubepods-besteffort-poded943fbf_2c62_41dd_a9df_ad0b568c95fd.slice.
Sep 12 17:50:14.719295 kubelet[3219]: I0912 17:50:14.718483 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed943fbf-2c62-41dd-a9df-ad0b568c95fd-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-gvrxr\" (UID: \"ed943fbf-2c62-41dd-a9df-ad0b568c95fd\") " pod="kube-system/cilium-operator-6c4d7847fc-gvrxr"
Sep 12 17:50:14.719295 kubelet[3219]: I0912 17:50:14.718555 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwshc\" (UniqueName: \"kubernetes.io/projected/ed943fbf-2c62-41dd-a9df-ad0b568c95fd-kube-api-access-wwshc\") pod \"cilium-operator-6c4d7847fc-gvrxr\" (UID: \"ed943fbf-2c62-41dd-a9df-ad0b568c95fd\") " pod="kube-system/cilium-operator-6c4d7847fc-gvrxr"
Sep 12 17:50:14.891575 containerd[1903]: time="2025-09-12T17:50:14.891472415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzkc4,Uid:73e5fa01-cd66-4ff8-b98e-e2c0a5bda045,Namespace:kube-system,Attempt:0,}"
Sep 12 17:50:14.895781 containerd[1903]: time="2025-09-12T17:50:14.895717242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nnmb5,Uid:f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460,Namespace:kube-system,Attempt:0,}"
Sep 12 17:50:14.935358 containerd[1903]: time="2025-09-12T17:50:14.935313761Z" level=info msg="connecting to shim 6079651d2b675dcb9a4f69816f5f7b301f80a251f048f39ea914fa8e697a5171" address="unix:///run/containerd/s/4dceccf3d3f96979d0cbfe6d7bc89baae04cc9aa14da1980440bc64ec7160418" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:50:14.946443 containerd[1903]: time="2025-09-12T17:50:14.946377190Z" level=info msg="connecting to shim abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9" address="unix:///run/containerd/s/42e6b002bf12dceb6f9b1e22165fc88ef5e5389c6cdf570a98eda3ab952b0190" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:50:14.963479 systemd[1]: Started cri-containerd-6079651d2b675dcb9a4f69816f5f7b301f80a251f048f39ea914fa8e697a5171.scope - libcontainer container 6079651d2b675dcb9a4f69816f5f7b301f80a251f048f39ea914fa8e697a5171.
Sep 12 17:50:14.976262 systemd[1]: Started cri-containerd-abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9.scope - libcontainer container abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9.
Sep 12 17:50:15.000899 containerd[1903]: time="2025-09-12T17:50:15.000846517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gvrxr,Uid:ed943fbf-2c62-41dd-a9df-ad0b568c95fd,Namespace:kube-system,Attempt:0,}"
Sep 12 17:50:15.023691 containerd[1903]: time="2025-09-12T17:50:15.023631991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzkc4,Uid:73e5fa01-cd66-4ff8-b98e-e2c0a5bda045,Namespace:kube-system,Attempt:0,} returns sandbox id \"6079651d2b675dcb9a4f69816f5f7b301f80a251f048f39ea914fa8e697a5171\""
Sep 12 17:50:15.027244 containerd[1903]: time="2025-09-12T17:50:15.027192354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nnmb5,Uid:f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460,Namespace:kube-system,Attempt:0,} returns sandbox id \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\""
Sep 12 17:50:15.029093 containerd[1903]: time="2025-09-12T17:50:15.029046180Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 12 17:50:15.032804 containerd[1903]: time="2025-09-12T17:50:15.032763273Z" level=info msg="CreateContainer within sandbox \"6079651d2b675dcb9a4f69816f5f7b301f80a251f048f39ea914fa8e697a5171\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:50:15.061702 containerd[1903]: time="2025-09-12T17:50:15.061641382Z" level=info msg="connecting to shim a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399" address="unix:///run/containerd/s/c01a3a737a2288f271c5148911d0916ae24cd08dce11da35d52a02511bc04b0a" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:50:15.066874 containerd[1903]: time="2025-09-12T17:50:15.066817860Z" level=info msg="Container 30528b85aa014528fc38cc9791cf95639b5ff7a30c2ce5c9a955720f3256deb0: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:50:15.093549 systemd[1]: Started cri-containerd-a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399.scope - libcontainer container a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399.
Sep 12 17:50:15.115702 containerd[1903]: time="2025-09-12T17:50:15.115662072Z" level=info msg="CreateContainer within sandbox \"6079651d2b675dcb9a4f69816f5f7b301f80a251f048f39ea914fa8e697a5171\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"30528b85aa014528fc38cc9791cf95639b5ff7a30c2ce5c9a955720f3256deb0\""
Sep 12 17:50:15.116800 containerd[1903]: time="2025-09-12T17:50:15.116773168Z" level=info msg="StartContainer for \"30528b85aa014528fc38cc9791cf95639b5ff7a30c2ce5c9a955720f3256deb0\""
Sep 12 17:50:15.118276 containerd[1903]: time="2025-09-12T17:50:15.118206444Z" level=info msg="connecting to shim 30528b85aa014528fc38cc9791cf95639b5ff7a30c2ce5c9a955720f3256deb0" address="unix:///run/containerd/s/4dceccf3d3f96979d0cbfe6d7bc89baae04cc9aa14da1980440bc64ec7160418" protocol=ttrpc version=3
Sep 12 17:50:15.148258 systemd[1]: Started cri-containerd-30528b85aa014528fc38cc9791cf95639b5ff7a30c2ce5c9a955720f3256deb0.scope - libcontainer container 30528b85aa014528fc38cc9791cf95639b5ff7a30c2ce5c9a955720f3256deb0.
Sep 12 17:50:15.166974 containerd[1903]: time="2025-09-12T17:50:15.166903600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gvrxr,Uid:ed943fbf-2c62-41dd-a9df-ad0b568c95fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\"" Sep 12 17:50:15.210637 containerd[1903]: time="2025-09-12T17:50:15.210532356Z" level=info msg="StartContainer for \"30528b85aa014528fc38cc9791cf95639b5ff7a30c2ce5c9a955720f3256deb0\" returns successfully" Sep 12 17:50:16.015777 kubelet[3219]: I0912 17:50:16.014272 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lzkc4" podStartSLOduration=2.01423412 podStartE2EDuration="2.01423412s" podCreationTimestamp="2025-09-12 17:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:50:16.012904507 +0000 UTC m=+6.289436145" watchObservedRunningTime="2025-09-12 17:50:16.01423412 +0000 UTC m=+6.290765758" Sep 12 17:50:19.393750 update_engine[1860]: I20250912 17:50:19.393684 1860 update_attempter.cc:509] Updating boot flags... Sep 12 17:50:20.380251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860279775.mount: Deactivated successfully. 
Sep 12 17:50:22.893345 containerd[1903]: time="2025-09-12T17:50:22.893269754Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:50:22.896036 containerd[1903]: time="2025-09-12T17:50:22.895892483Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 17:50:22.902278 containerd[1903]: time="2025-09-12T17:50:22.898469203Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:50:22.907086 containerd[1903]: time="2025-09-12T17:50:22.906815559Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.877709805s" Sep 12 17:50:22.907086 containerd[1903]: time="2025-09-12T17:50:22.906869725Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 17:50:22.908376 containerd[1903]: time="2025-09-12T17:50:22.908340967Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:50:22.915455 containerd[1903]: time="2025-09-12T17:50:22.915339194Z" level=info msg="CreateContainer within sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:50:22.949246 containerd[1903]: time="2025-09-12T17:50:22.949184967Z" level=info msg="Container eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:22.965111 containerd[1903]: time="2025-09-12T17:50:22.963583977Z" level=info msg="CreateContainer within sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\"" Sep 12 17:50:22.969007 containerd[1903]: time="2025-09-12T17:50:22.968966600Z" level=info msg="StartContainer for \"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\"" Sep 12 17:50:22.972031 containerd[1903]: time="2025-09-12T17:50:22.971982250Z" level=info msg="connecting to shim eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac" address="unix:///run/containerd/s/42e6b002bf12dceb6f9b1e22165fc88ef5e5389c6cdf570a98eda3ab952b0190" protocol=ttrpc version=3 Sep 12 17:50:23.037323 systemd[1]: Started cri-containerd-eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac.scope - libcontainer container eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac. Sep 12 17:50:23.080310 containerd[1903]: time="2025-09-12T17:50:23.080260000Z" level=info msg="StartContainer for \"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\" returns successfully" Sep 12 17:50:23.096899 systemd[1]: cri-containerd-eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac.scope: Deactivated successfully. 
Sep 12 17:50:23.126139 containerd[1903]: time="2025-09-12T17:50:23.125921651Z" level=info msg="received exit event container_id:\"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\" id:\"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\" pid:3819 exited_at:{seconds:1757699423 nanos:100273667}" Sep 12 17:50:23.141672 containerd[1903]: time="2025-09-12T17:50:23.141625585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\" id:\"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\" pid:3819 exited_at:{seconds:1757699423 nanos:100273667}" Sep 12 17:50:23.164375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac-rootfs.mount: Deactivated successfully. Sep 12 17:50:24.068887 containerd[1903]: time="2025-09-12T17:50:24.068831541Z" level=info msg="CreateContainer within sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:50:24.085293 containerd[1903]: time="2025-09-12T17:50:24.085246311Z" level=info msg="Container b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:24.097691 containerd[1903]: time="2025-09-12T17:50:24.097642905Z" level=info msg="CreateContainer within sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\"" Sep 12 17:50:24.098620 containerd[1903]: time="2025-09-12T17:50:24.098583908Z" level=info msg="StartContainer for \"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\"" Sep 12 17:50:24.100828 containerd[1903]: time="2025-09-12T17:50:24.100790173Z" level=info msg="connecting to shim 
b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5" address="unix:///run/containerd/s/42e6b002bf12dceb6f9b1e22165fc88ef5e5389c6cdf570a98eda3ab952b0190" protocol=ttrpc version=3 Sep 12 17:50:24.135120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339681435.mount: Deactivated successfully. Sep 12 17:50:24.152615 systemd[1]: Started cri-containerd-b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5.scope - libcontainer container b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5. Sep 12 17:50:24.210622 containerd[1903]: time="2025-09-12T17:50:24.210578719Z" level=info msg="StartContainer for \"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\" returns successfully" Sep 12 17:50:24.224787 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:50:24.225191 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:50:24.226323 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:50:24.228883 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:50:24.231671 systemd[1]: cri-containerd-b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5.scope: Deactivated successfully. 
Sep 12 17:50:24.234667 containerd[1903]: time="2025-09-12T17:50:24.234632050Z" level=info msg="received exit event container_id:\"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\" id:\"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\" pid:3868 exited_at:{seconds:1757699424 nanos:234401923}" Sep 12 17:50:24.237631 containerd[1903]: time="2025-09-12T17:50:24.237589319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\" id:\"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\" pid:3868 exited_at:{seconds:1757699424 nanos:234401923}" Sep 12 17:50:24.272896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:50:25.073024 containerd[1903]: time="2025-09-12T17:50:25.072408536Z" level=info msg="CreateContainer within sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:50:25.083460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5-rootfs.mount: Deactivated successfully. Sep 12 17:50:25.102403 containerd[1903]: time="2025-09-12T17:50:25.102333717Z" level=info msg="Container 8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:25.108893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632600887.mount: Deactivated successfully. Sep 12 17:50:25.116810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794050625.mount: Deactivated successfully. 
Sep 12 17:50:25.132680 containerd[1903]: time="2025-09-12T17:50:25.132638877Z" level=info msg="CreateContainer within sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\"" Sep 12 17:50:25.134084 containerd[1903]: time="2025-09-12T17:50:25.133893390Z" level=info msg="StartContainer for \"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\"" Sep 12 17:50:25.136555 containerd[1903]: time="2025-09-12T17:50:25.136507009Z" level=info msg="connecting to shim 8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be" address="unix:///run/containerd/s/42e6b002bf12dceb6f9b1e22165fc88ef5e5389c6cdf570a98eda3ab952b0190" protocol=ttrpc version=3 Sep 12 17:50:25.164296 systemd[1]: Started cri-containerd-8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be.scope - libcontainer container 8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be. Sep 12 17:50:25.228685 containerd[1903]: time="2025-09-12T17:50:25.228618890Z" level=info msg="StartContainer for \"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\" returns successfully" Sep 12 17:50:25.239092 systemd[1]: cri-containerd-8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be.scope: Deactivated successfully. Sep 12 17:50:25.239511 systemd[1]: cri-containerd-8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be.scope: Consumed 31ms CPU time, 3.8M memory peak, 1M read from disk. 
Sep 12 17:50:25.242630 containerd[1903]: time="2025-09-12T17:50:25.241846375Z" level=info msg="received exit event container_id:\"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\" id:\"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\" pid:3917 exited_at:{seconds:1757699425 nanos:240929849}" Sep 12 17:50:25.242630 containerd[1903]: time="2025-09-12T17:50:25.242213684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\" id:\"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\" pid:3917 exited_at:{seconds:1757699425 nanos:240929849}" Sep 12 17:50:26.082717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be-rootfs.mount: Deactivated successfully. Sep 12 17:50:26.091326 containerd[1903]: time="2025-09-12T17:50:26.090368658Z" level=info msg="CreateContainer within sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:50:26.107090 containerd[1903]: time="2025-09-12T17:50:26.105435771Z" level=info msg="Container f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:26.128328 containerd[1903]: time="2025-09-12T17:50:26.128287579Z" level=info msg="CreateContainer within sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\"" Sep 12 17:50:26.130200 containerd[1903]: time="2025-09-12T17:50:26.130162180Z" level=info msg="StartContainer for \"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\"" Sep 12 17:50:26.132588 containerd[1903]: time="2025-09-12T17:50:26.132497637Z" level=info msg="connecting to shim 
f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282" address="unix:///run/containerd/s/42e6b002bf12dceb6f9b1e22165fc88ef5e5389c6cdf570a98eda3ab952b0190" protocol=ttrpc version=3 Sep 12 17:50:26.177315 systemd[1]: Started cri-containerd-f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282.scope - libcontainer container f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282. Sep 12 17:50:26.236805 systemd[1]: cri-containerd-f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282.scope: Deactivated successfully. Sep 12 17:50:26.239640 containerd[1903]: time="2025-09-12T17:50:26.239589115Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\" id:\"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\" pid:3960 exited_at:{seconds:1757699426 nanos:237671647}" Sep 12 17:50:26.240871 containerd[1903]: time="2025-09-12T17:50:26.240839005Z" level=info msg="received exit event container_id:\"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\" id:\"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\" pid:3960 exited_at:{seconds:1757699426 nanos:237671647}" Sep 12 17:50:26.243324 containerd[1903]: time="2025-09-12T17:50:26.243295386Z" level=info msg="StartContainer for \"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\" returns successfully" Sep 12 17:50:26.442781 containerd[1903]: time="2025-09-12T17:50:26.442713940Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:50:26.444405 containerd[1903]: time="2025-09-12T17:50:26.444318174Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 
17:50:26.447608 containerd[1903]: time="2025-09-12T17:50:26.446567689Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:50:26.448119 containerd[1903]: time="2025-09-12T17:50:26.448079840Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.539551232s" Sep 12 17:50:26.448225 containerd[1903]: time="2025-09-12T17:50:26.448127440Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 17:50:26.456426 containerd[1903]: time="2025-09-12T17:50:26.456355673Z" level=info msg="CreateContainer within sandbox \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:50:26.489715 containerd[1903]: time="2025-09-12T17:50:26.489649841Z" level=info msg="Container 4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:26.500300 containerd[1903]: time="2025-09-12T17:50:26.500252802Z" level=info msg="CreateContainer within sandbox \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\"" Sep 12 17:50:26.500836 containerd[1903]: time="2025-09-12T17:50:26.500816970Z" level=info msg="StartContainer for 
\"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\"" Sep 12 17:50:26.502620 containerd[1903]: time="2025-09-12T17:50:26.502555405Z" level=info msg="connecting to shim 4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c" address="unix:///run/containerd/s/c01a3a737a2288f271c5148911d0916ae24cd08dce11da35d52a02511bc04b0a" protocol=ttrpc version=3 Sep 12 17:50:26.528315 systemd[1]: Started cri-containerd-4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c.scope - libcontainer container 4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c. Sep 12 17:50:26.567462 containerd[1903]: time="2025-09-12T17:50:26.567430304Z" level=info msg="StartContainer for \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\" returns successfully" Sep 12 17:50:27.092362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282-rootfs.mount: Deactivated successfully. Sep 12 17:50:27.130116 containerd[1903]: time="2025-09-12T17:50:27.128396076Z" level=info msg="CreateContainer within sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:50:27.178081 containerd[1903]: time="2025-09-12T17:50:27.177248632Z" level=info msg="Container 795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:27.184864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1698652427.mount: Deactivated successfully. 
Sep 12 17:50:27.198143 containerd[1903]: time="2025-09-12T17:50:27.198096827Z" level=info msg="CreateContainer within sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\"" Sep 12 17:50:27.199751 containerd[1903]: time="2025-09-12T17:50:27.199718087Z" level=info msg="StartContainer for \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\"" Sep 12 17:50:27.202173 containerd[1903]: time="2025-09-12T17:50:27.202118454Z" level=info msg="connecting to shim 795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975" address="unix:///run/containerd/s/42e6b002bf12dceb6f9b1e22165fc88ef5e5389c6cdf570a98eda3ab952b0190" protocol=ttrpc version=3 Sep 12 17:50:27.263305 systemd[1]: Started cri-containerd-795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975.scope - libcontainer container 795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975. 
Sep 12 17:50:27.418067 containerd[1903]: time="2025-09-12T17:50:27.418015877Z" level=info msg="StartContainer for \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" returns successfully" Sep 12 17:50:27.547665 kubelet[3219]: I0912 17:50:27.547598 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-gvrxr" podStartSLOduration=2.26425799 podStartE2EDuration="13.546307305s" podCreationTimestamp="2025-09-12 17:50:14 +0000 UTC" firstStartedPulling="2025-09-12 17:50:15.16903794 +0000 UTC m=+5.445569571" lastFinishedPulling="2025-09-12 17:50:26.451087261 +0000 UTC m=+16.727618886" observedRunningTime="2025-09-12 17:50:27.370337176 +0000 UTC m=+17.646868812" watchObservedRunningTime="2025-09-12 17:50:27.546307305 +0000 UTC m=+17.822838942" Sep 12 17:50:27.744938 containerd[1903]: time="2025-09-12T17:50:27.744559471Z" level=info msg="TaskExit event in podsandbox handler container_id:\"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" id:\"13a1aced9dadd346228c74f7d5431c54eefc59918e57685f507c343dac998d43\" pid:4064 exited_at:{seconds:1757699427 nanos:743399807}" Sep 12 17:50:27.837981 kubelet[3219]: I0912 17:50:27.837931 3219 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:50:27.918553 systemd[1]: Created slice kubepods-burstable-pod4030bdec_e4a4_445f_8274_a09ecd080f50.slice - libcontainer container kubepods-burstable-pod4030bdec_e4a4_445f_8274_a09ecd080f50.slice. 
Sep 12 17:50:27.928200 kubelet[3219]: I0912 17:50:27.928033 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt4c9\" (UniqueName: \"kubernetes.io/projected/4030bdec-e4a4-445f-8274-a09ecd080f50-kube-api-access-qt4c9\") pod \"coredns-674b8bbfcf-dxn4f\" (UID: \"4030bdec-e4a4-445f-8274-a09ecd080f50\") " pod="kube-system/coredns-674b8bbfcf-dxn4f" Sep 12 17:50:27.928268 systemd[1]: Created slice kubepods-burstable-poda616613f_e9f3_43f3_86aa_6e54bf28ef44.slice - libcontainer container kubepods-burstable-poda616613f_e9f3_43f3_86aa_6e54bf28ef44.slice. Sep 12 17:50:27.928686 kubelet[3219]: I0912 17:50:27.928087 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a616613f-e9f3-43f3-86aa-6e54bf28ef44-config-volume\") pod \"coredns-674b8bbfcf-js5ds\" (UID: \"a616613f-e9f3-43f3-86aa-6e54bf28ef44\") " pod="kube-system/coredns-674b8bbfcf-js5ds" Sep 12 17:50:27.928686 kubelet[3219]: I0912 17:50:27.928659 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4030bdec-e4a4-445f-8274-a09ecd080f50-config-volume\") pod \"coredns-674b8bbfcf-dxn4f\" (UID: \"4030bdec-e4a4-445f-8274-a09ecd080f50\") " pod="kube-system/coredns-674b8bbfcf-dxn4f" Sep 12 17:50:27.928792 kubelet[3219]: I0912 17:50:27.928690 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h6fv\" (UniqueName: \"kubernetes.io/projected/a616613f-e9f3-43f3-86aa-6e54bf28ef44-kube-api-access-4h6fv\") pod \"coredns-674b8bbfcf-js5ds\" (UID: \"a616613f-e9f3-43f3-86aa-6e54bf28ef44\") " pod="kube-system/coredns-674b8bbfcf-js5ds" Sep 12 17:50:28.224941 containerd[1903]: time="2025-09-12T17:50:28.224884764Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-dxn4f,Uid:4030bdec-e4a4-445f-8274-a09ecd080f50,Namespace:kube-system,Attempt:0,}" Sep 12 17:50:28.235133 containerd[1903]: time="2025-09-12T17:50:28.234840848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-js5ds,Uid:a616613f-e9f3-43f3-86aa-6e54bf28ef44,Namespace:kube-system,Attempt:0,}" Sep 12 17:50:30.347360 (udev-worker)[4124]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:50:30.348302 systemd-networkd[1824]: cilium_host: Link UP Sep 12 17:50:30.350757 systemd-networkd[1824]: cilium_net: Link UP Sep 12 17:50:30.350925 systemd-networkd[1824]: cilium_net: Gained carrier Sep 12 17:50:30.351048 systemd-networkd[1824]: cilium_host: Gained carrier Sep 12 17:50:30.352833 (udev-worker)[4162]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:50:30.460299 systemd-networkd[1824]: cilium_net: Gained IPv6LL Sep 12 17:50:30.466931 (udev-worker)[4170]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:50:30.472948 systemd-networkd[1824]: cilium_vxlan: Link UP Sep 12 17:50:30.473009 systemd-networkd[1824]: cilium_vxlan: Gained carrier Sep 12 17:50:31.253206 kernel: NET: Registered PF_ALG protocol family Sep 12 17:50:31.279142 systemd-networkd[1824]: cilium_host: Gained IPv6LL Sep 12 17:50:32.016620 systemd-networkd[1824]: lxc_health: Link UP Sep 12 17:50:32.024620 systemd-networkd[1824]: lxc_health: Gained carrier Sep 12 17:50:32.108681 systemd-networkd[1824]: cilium_vxlan: Gained IPv6LL Sep 12 17:50:32.360592 kernel: eth0: renamed from tmpea279 Sep 12 17:50:32.363967 (udev-worker)[4171]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:50:32.364254 systemd-networkd[1824]: lxc2fac71838c16: Link UP Sep 12 17:50:32.366635 systemd-networkd[1824]: lxc2fac71838c16: Gained carrier Sep 12 17:50:32.366882 systemd-networkd[1824]: lxcbe4127ea532a: Link UP Sep 12 17:50:32.372449 kernel: eth0: renamed from tmp1039e Sep 12 17:50:32.374623 systemd-networkd[1824]: lxcbe4127ea532a: Gained carrier Sep 12 17:50:32.936250 kubelet[3219]: I0912 17:50:32.936183 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nnmb5" podStartSLOduration=11.056311786 podStartE2EDuration="18.936163531s" podCreationTimestamp="2025-09-12 17:50:14 +0000 UTC" firstStartedPulling="2025-09-12 17:50:15.028298484 +0000 UTC m=+5.304830113" lastFinishedPulling="2025-09-12 17:50:22.908150235 +0000 UTC m=+13.184681858" observedRunningTime="2025-09-12 17:50:28.135866106 +0000 UTC m=+18.412397742" watchObservedRunningTime="2025-09-12 17:50:32.936163531 +0000 UTC m=+23.212695169" Sep 12 17:50:33.580348 systemd-networkd[1824]: lxc2fac71838c16: Gained IPv6LL Sep 12 17:50:33.580685 systemd-networkd[1824]: lxc_health: Gained IPv6LL Sep 12 17:50:33.836386 systemd-networkd[1824]: lxcbe4127ea532a: Gained IPv6LL Sep 12 17:50:36.342106 ntpd[1854]: Listen normally on 7 cilium_host 192.168.0.244:123 Sep 12 17:50:36.342883 ntpd[1854]: 12 Sep 17:50:36 ntpd[1854]: Listen normally on 7 cilium_host 192.168.0.244:123 Sep 12 17:50:36.342883 ntpd[1854]: 12 Sep 17:50:36 ntpd[1854]: Listen normally on 8 cilium_net [fe80::1c5a:cff:fea9:58b0%4]:123 Sep 12 17:50:36.342883 ntpd[1854]: 12 Sep 17:50:36 ntpd[1854]: Listen normally on 9 cilium_host [fe80::50aa:b0ff:fe7d:5aca%5]:123 Sep 12 17:50:36.342883 ntpd[1854]: 12 Sep 17:50:36 ntpd[1854]: Listen normally on 10 cilium_vxlan [fe80::b85a:5bff:fee1:8c33%6]:123 Sep 12 17:50:36.342883 ntpd[1854]: 12 Sep 17:50:36 ntpd[1854]: Listen normally on 11 lxc_health [fe80::40e1:70ff:feff:697a%8]:123 Sep 12 17:50:36.342883 ntpd[1854]: 12 Sep 17:50:36 ntpd[1854]: Listen normally on 12 
lxc2fac71838c16 [fe80::7481:96ff:fea8:eef6%10]:123 Sep 12 17:50:36.342883 ntpd[1854]: 12 Sep 17:50:36 ntpd[1854]: Listen normally on 13 lxcbe4127ea532a [fe80::2c5e:4bff:fef6:f94b%12]:123 Sep 12 17:50:36.342197 ntpd[1854]: Listen normally on 8 cilium_net [fe80::1c5a:cff:fea9:58b0%4]:123 Sep 12 17:50:36.342250 ntpd[1854]: Listen normally on 9 cilium_host [fe80::50aa:b0ff:fe7d:5aca%5]:123 Sep 12 17:50:36.342290 ntpd[1854]: Listen normally on 10 cilium_vxlan [fe80::b85a:5bff:fee1:8c33%6]:123 Sep 12 17:50:36.342327 ntpd[1854]: Listen normally on 11 lxc_health [fe80::40e1:70ff:feff:697a%8]:123 Sep 12 17:50:36.342364 ntpd[1854]: Listen normally on 12 lxc2fac71838c16 [fe80::7481:96ff:fea8:eef6%10]:123 Sep 12 17:50:36.342401 ntpd[1854]: Listen normally on 13 lxcbe4127ea532a [fe80::2c5e:4bff:fef6:f94b%12]:123 Sep 12 17:50:36.955242 containerd[1903]: time="2025-09-12T17:50:36.955188574Z" level=info msg="connecting to shim 1039eccfc8bb56ef702941c5e22345b057c9fa93e2948fabd946a27ba2a1cf81" address="unix:///run/containerd/s/742bb0140b86685d9a1846cf532acc1bb95904f09e9c71ef49a13bfd9a63f896" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:50:37.020829 containerd[1903]: time="2025-09-12T17:50:37.020721506Z" level=info msg="connecting to shim ea279593439ec2a04378ddc456d93dd8c6844c7c3a419e55392f00b7450f56f9" address="unix:///run/containerd/s/c3af82bf412bdb2af8d712bb3b12ef3e239b7af02b554f02bb0fb6466106707c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:50:37.030278 systemd[1]: Started cri-containerd-1039eccfc8bb56ef702941c5e22345b057c9fa93e2948fabd946a27ba2a1cf81.scope - libcontainer container 1039eccfc8bb56ef702941c5e22345b057c9fa93e2948fabd946a27ba2a1cf81. Sep 12 17:50:37.081322 systemd[1]: Started cri-containerd-ea279593439ec2a04378ddc456d93dd8c6844c7c3a419e55392f00b7450f56f9.scope - libcontainer container ea279593439ec2a04378ddc456d93dd8c6844c7c3a419e55392f00b7450f56f9. 
Sep 12 17:50:37.178762 containerd[1903]: time="2025-09-12T17:50:37.178718088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-js5ds,Uid:a616613f-e9f3-43f3-86aa-6e54bf28ef44,Namespace:kube-system,Attempt:0,} returns sandbox id \"1039eccfc8bb56ef702941c5e22345b057c9fa93e2948fabd946a27ba2a1cf81\"" Sep 12 17:50:37.220210 containerd[1903]: time="2025-09-12T17:50:37.218820421Z" level=info msg="CreateContainer within sandbox \"1039eccfc8bb56ef702941c5e22345b057c9fa93e2948fabd946a27ba2a1cf81\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:50:37.236088 containerd[1903]: time="2025-09-12T17:50:37.236018649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dxn4f,Uid:4030bdec-e4a4-445f-8274-a09ecd080f50,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea279593439ec2a04378ddc456d93dd8c6844c7c3a419e55392f00b7450f56f9\"" Sep 12 17:50:37.246348 containerd[1903]: time="2025-09-12T17:50:37.244724976Z" level=info msg="CreateContainer within sandbox \"ea279593439ec2a04378ddc456d93dd8c6844c7c3a419e55392f00b7450f56f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:50:37.253236 containerd[1903]: time="2025-09-12T17:50:37.253187940Z" level=info msg="Container 688c8b0eb3c16aa8f5385f5223555c34ee2661608ba2531371221a71a76d756e: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:37.265229 containerd[1903]: time="2025-09-12T17:50:37.265183103Z" level=info msg="Container 5328c139dcfcb7bc7b482574fe4af5d3c05db3be24ac981237b12ed06792c86b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:37.270896 containerd[1903]: time="2025-09-12T17:50:37.270847343Z" level=info msg="CreateContainer within sandbox \"1039eccfc8bb56ef702941c5e22345b057c9fa93e2948fabd946a27ba2a1cf81\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"688c8b0eb3c16aa8f5385f5223555c34ee2661608ba2531371221a71a76d756e\"" Sep 12 17:50:37.271837 containerd[1903]: 
time="2025-09-12T17:50:37.271793613Z" level=info msg="StartContainer for \"688c8b0eb3c16aa8f5385f5223555c34ee2661608ba2531371221a71a76d756e\"" Sep 12 17:50:37.274326 containerd[1903]: time="2025-09-12T17:50:37.274282777Z" level=info msg="connecting to shim 688c8b0eb3c16aa8f5385f5223555c34ee2661608ba2531371221a71a76d756e" address="unix:///run/containerd/s/742bb0140b86685d9a1846cf532acc1bb95904f09e9c71ef49a13bfd9a63f896" protocol=ttrpc version=3 Sep 12 17:50:37.276410 containerd[1903]: time="2025-09-12T17:50:37.276106280Z" level=info msg="CreateContainer within sandbox \"ea279593439ec2a04378ddc456d93dd8c6844c7c3a419e55392f00b7450f56f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5328c139dcfcb7bc7b482574fe4af5d3c05db3be24ac981237b12ed06792c86b\"" Sep 12 17:50:37.276868 containerd[1903]: time="2025-09-12T17:50:37.276821017Z" level=info msg="StartContainer for \"5328c139dcfcb7bc7b482574fe4af5d3c05db3be24ac981237b12ed06792c86b\"" Sep 12 17:50:37.279367 containerd[1903]: time="2025-09-12T17:50:37.279237498Z" level=info msg="connecting to shim 5328c139dcfcb7bc7b482574fe4af5d3c05db3be24ac981237b12ed06792c86b" address="unix:///run/containerd/s/c3af82bf412bdb2af8d712bb3b12ef3e239b7af02b554f02bb0fb6466106707c" protocol=ttrpc version=3 Sep 12 17:50:37.309260 systemd[1]: Started cri-containerd-5328c139dcfcb7bc7b482574fe4af5d3c05db3be24ac981237b12ed06792c86b.scope - libcontainer container 5328c139dcfcb7bc7b482574fe4af5d3c05db3be24ac981237b12ed06792c86b. Sep 12 17:50:37.310772 systemd[1]: Started cri-containerd-688c8b0eb3c16aa8f5385f5223555c34ee2661608ba2531371221a71a76d756e.scope - libcontainer container 688c8b0eb3c16aa8f5385f5223555c34ee2661608ba2531371221a71a76d756e. 
Sep 12 17:50:37.388654 containerd[1903]: time="2025-09-12T17:50:37.388609768Z" level=info msg="StartContainer for \"688c8b0eb3c16aa8f5385f5223555c34ee2661608ba2531371221a71a76d756e\" returns successfully"
Sep 12 17:50:37.388887 containerd[1903]: time="2025-09-12T17:50:37.388802676Z" level=info msg="StartContainer for \"5328c139dcfcb7bc7b482574fe4af5d3c05db3be24ac981237b12ed06792c86b\" returns successfully"
Sep 12 17:50:37.929169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573336615.mount: Deactivated successfully.
Sep 12 17:50:38.210149 kubelet[3219]: I0912 17:50:38.207523 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-js5ds" podStartSLOduration=24.20750937 podStartE2EDuration="24.20750937s" podCreationTimestamp="2025-09-12 17:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:50:38.205592632 +0000 UTC m=+28.482124262" watchObservedRunningTime="2025-09-12 17:50:38.20750937 +0000 UTC m=+28.484041006"
Sep 12 17:50:38.227318 kubelet[3219]: I0912 17:50:38.227237 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dxn4f" podStartSLOduration=24.227200148 podStartE2EDuration="24.227200148s" podCreationTimestamp="2025-09-12 17:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:50:38.225016083 +0000 UTC m=+28.501547720" watchObservedRunningTime="2025-09-12 17:50:38.227200148 +0000 UTC m=+28.503731785"
Sep 12 17:50:44.529392 systemd[1]: Started sshd@7-172.31.28.120:22-139.178.68.195:38428.service - OpenSSH per-connection server daemon (139.178.68.195:38428).
Sep 12 17:50:44.747711 sshd[4698]: Accepted publickey for core from 139.178.68.195 port 38428 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:50:44.750045 sshd-session[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:50:44.757412 systemd-logind[1859]: New session 8 of user core.
Sep 12 17:50:44.762260 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 17:50:45.637768 sshd[4701]: Connection closed by 139.178.68.195 port 38428
Sep 12 17:50:45.638589 sshd-session[4698]: pam_unix(sshd:session): session closed for user core
Sep 12 17:50:45.645011 systemd[1]: sshd@7-172.31.28.120:22-139.178.68.195:38428.service: Deactivated successfully.
Sep 12 17:50:45.647397 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:50:45.648604 systemd-logind[1859]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:50:45.650432 systemd-logind[1859]: Removed session 8.
Sep 12 17:50:50.673826 systemd[1]: Started sshd@8-172.31.28.120:22-139.178.68.195:37462.service - OpenSSH per-connection server daemon (139.178.68.195:37462).
Sep 12 17:50:50.856850 sshd[4721]: Accepted publickey for core from 139.178.68.195 port 37462 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:50:50.858654 sshd-session[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:50:50.865070 systemd-logind[1859]: New session 9 of user core.
Sep 12 17:50:50.874327 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:50:51.088527 sshd[4724]: Connection closed by 139.178.68.195 port 37462
Sep 12 17:50:51.089789 sshd-session[4721]: pam_unix(sshd:session): session closed for user core
Sep 12 17:50:51.093876 systemd[1]: sshd@8-172.31.28.120:22-139.178.68.195:37462.service: Deactivated successfully.
Sep 12 17:50:51.096486 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 17:50:51.098349 systemd-logind[1859]: Session 9 logged out. Waiting for processes to exit.
Sep 12 17:50:51.100872 systemd-logind[1859]: Removed session 9.
Sep 12 17:50:56.124850 systemd[1]: Started sshd@9-172.31.28.120:22-139.178.68.195:37466.service - OpenSSH per-connection server daemon (139.178.68.195:37466).
Sep 12 17:50:56.310866 sshd[4737]: Accepted publickey for core from 139.178.68.195 port 37466 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:50:56.312425 sshd-session[4737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:50:56.318817 systemd-logind[1859]: New session 10 of user core.
Sep 12 17:50:56.327335 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 17:50:56.529876 sshd[4740]: Connection closed by 139.178.68.195 port 37466
Sep 12 17:50:56.530886 sshd-session[4737]: pam_unix(sshd:session): session closed for user core
Sep 12 17:50:56.534707 systemd[1]: sshd@9-172.31.28.120:22-139.178.68.195:37466.service: Deactivated successfully.
Sep 12 17:50:56.536582 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 17:50:56.539515 systemd-logind[1859]: Session 10 logged out. Waiting for processes to exit.
Sep 12 17:50:56.540755 systemd-logind[1859]: Removed session 10.
Sep 12 17:51:01.576508 systemd[1]: Started sshd@10-172.31.28.120:22-139.178.68.195:47154.service - OpenSSH per-connection server daemon (139.178.68.195:47154).
Sep 12 17:51:01.970712 sshd[4753]: Accepted publickey for core from 139.178.68.195 port 47154 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:02.004826 sshd-session[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:02.170101 systemd-logind[1859]: New session 11 of user core.
Sep 12 17:51:02.198778 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 17:51:02.835118 sshd[4756]: Connection closed by 139.178.68.195 port 47154
Sep 12 17:51:02.836904 sshd-session[4753]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:02.852711 systemd[1]: sshd@10-172.31.28.120:22-139.178.68.195:47154.service: Deactivated successfully.
Sep 12 17:51:02.868038 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 17:51:02.938347 systemd-logind[1859]: Session 11 logged out. Waiting for processes to exit.
Sep 12 17:51:02.950896 systemd[1]: Started sshd@11-172.31.28.120:22-139.178.68.195:47166.service - OpenSSH per-connection server daemon (139.178.68.195:47166).
Sep 12 17:51:02.959558 systemd-logind[1859]: Removed session 11.
Sep 12 17:51:03.280726 sshd[4769]: Accepted publickey for core from 139.178.68.195 port 47166 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:03.288867 sshd-session[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:03.339303 systemd-logind[1859]: New session 12 of user core.
Sep 12 17:51:03.357698 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 17:51:04.054725 sshd[4772]: Connection closed by 139.178.68.195 port 47166
Sep 12 17:51:04.055778 sshd-session[4769]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:04.066301 systemd[1]: sshd@11-172.31.28.120:22-139.178.68.195:47166.service: Deactivated successfully.
Sep 12 17:51:04.072540 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 17:51:04.076393 systemd-logind[1859]: Session 12 logged out. Waiting for processes to exit.
Sep 12 17:51:04.093344 systemd[1]: Started sshd@12-172.31.28.120:22-139.178.68.195:47168.service - OpenSSH per-connection server daemon (139.178.68.195:47168).
Sep 12 17:51:04.096296 systemd-logind[1859]: Removed session 12.
Sep 12 17:51:04.297850 sshd[4782]: Accepted publickey for core from 139.178.68.195 port 47168 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:04.306551 sshd-session[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:04.318626 systemd-logind[1859]: New session 13 of user core.
Sep 12 17:51:04.331459 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:51:04.602868 sshd[4785]: Connection closed by 139.178.68.195 port 47168
Sep 12 17:51:04.605320 sshd-session[4782]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:04.616187 systemd[1]: sshd@12-172.31.28.120:22-139.178.68.195:47168.service: Deactivated successfully.
Sep 12 17:51:04.622158 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:51:04.625130 systemd-logind[1859]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:51:04.630695 systemd-logind[1859]: Removed session 13.
Sep 12 17:51:09.635326 systemd[1]: Started sshd@13-172.31.28.120:22-139.178.68.195:47174.service - OpenSSH per-connection server daemon (139.178.68.195:47174).
Sep 12 17:51:09.807602 sshd[4798]: Accepted publickey for core from 139.178.68.195 port 47174 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:09.809365 sshd-session[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:09.814575 systemd-logind[1859]: New session 14 of user core.
Sep 12 17:51:09.823299 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:51:10.034406 sshd[4801]: Connection closed by 139.178.68.195 port 47174
Sep 12 17:51:10.035261 sshd-session[4798]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:10.039379 systemd-logind[1859]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:51:10.039851 systemd[1]: sshd@13-172.31.28.120:22-139.178.68.195:47174.service: Deactivated successfully.
Sep 12 17:51:10.042117 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:51:10.043882 systemd-logind[1859]: Removed session 14.
Sep 12 17:51:15.075421 systemd[1]: Started sshd@14-172.31.28.120:22-139.178.68.195:60330.service - OpenSSH per-connection server daemon (139.178.68.195:60330).
Sep 12 17:51:15.252146 sshd[4814]: Accepted publickey for core from 139.178.68.195 port 60330 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:15.253684 sshd-session[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:15.259187 systemd-logind[1859]: New session 15 of user core.
Sep 12 17:51:15.269516 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:51:15.452171 sshd[4817]: Connection closed by 139.178.68.195 port 60330
Sep 12 17:51:15.452308 sshd-session[4814]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:15.458535 systemd[1]: sshd@14-172.31.28.120:22-139.178.68.195:60330.service: Deactivated successfully.
Sep 12 17:51:15.460713 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:51:15.462611 systemd-logind[1859]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:51:15.464087 systemd-logind[1859]: Removed session 15.
Sep 12 17:51:15.483046 systemd[1]: Started sshd@15-172.31.28.120:22-139.178.68.195:60334.service - OpenSSH per-connection server daemon (139.178.68.195:60334).
Sep 12 17:51:15.651613 sshd[4829]: Accepted publickey for core from 139.178.68.195 port 60334 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:15.653361 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:15.659228 systemd-logind[1859]: New session 16 of user core.
Sep 12 17:51:15.663227 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:51:16.363787 sshd[4832]: Connection closed by 139.178.68.195 port 60334
Sep 12 17:51:16.364806 sshd-session[4829]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:16.382447 systemd[1]: sshd@15-172.31.28.120:22-139.178.68.195:60334.service: Deactivated successfully.
Sep 12 17:51:16.385643 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:51:16.388422 systemd-logind[1859]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:51:16.402026 systemd[1]: Started sshd@16-172.31.28.120:22-139.178.68.195:60342.service - OpenSSH per-connection server daemon (139.178.68.195:60342).
Sep 12 17:51:16.403459 systemd-logind[1859]: Removed session 16.
Sep 12 17:51:16.603618 sshd[4844]: Accepted publickey for core from 139.178.68.195 port 60342 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:16.606148 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:16.616141 systemd-logind[1859]: New session 17 of user core.
Sep 12 17:51:16.621400 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:51:17.292250 sshd[4847]: Connection closed by 139.178.68.195 port 60342
Sep 12 17:51:17.294242 sshd-session[4844]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:17.307587 systemd[1]: sshd@16-172.31.28.120:22-139.178.68.195:60342.service: Deactivated successfully.
Sep 12 17:51:17.310813 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:51:17.313275 systemd-logind[1859]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:51:17.328131 systemd-logind[1859]: Removed session 17.
Sep 12 17:51:17.328362 systemd[1]: Started sshd@17-172.31.28.120:22-139.178.68.195:60350.service - OpenSSH per-connection server daemon (139.178.68.195:60350).
Sep 12 17:51:17.506308 sshd[4864]: Accepted publickey for core from 139.178.68.195 port 60350 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:17.507736 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:17.513506 systemd-logind[1859]: New session 18 of user core.
Sep 12 17:51:17.531304 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:51:17.885768 sshd[4867]: Connection closed by 139.178.68.195 port 60350
Sep 12 17:51:17.887120 sshd-session[4864]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:17.895623 systemd-logind[1859]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:51:17.896516 systemd[1]: sshd@17-172.31.28.120:22-139.178.68.195:60350.service: Deactivated successfully.
Sep 12 17:51:17.899220 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:51:17.901877 systemd-logind[1859]: Removed session 18.
Sep 12 17:51:17.923269 systemd[1]: Started sshd@18-172.31.28.120:22-139.178.68.195:60354.service - OpenSSH per-connection server daemon (139.178.68.195:60354).
Sep 12 17:51:18.097980 sshd[4877]: Accepted publickey for core from 139.178.68.195 port 60354 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:18.099550 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:18.105781 systemd-logind[1859]: New session 19 of user core.
Sep 12 17:51:18.112296 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:51:18.300622 sshd[4880]: Connection closed by 139.178.68.195 port 60354
Sep 12 17:51:18.302670 sshd-session[4877]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:18.306088 systemd[1]: sshd@18-172.31.28.120:22-139.178.68.195:60354.service: Deactivated successfully.
Sep 12 17:51:18.308445 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:51:18.311157 systemd-logind[1859]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:51:18.312160 systemd-logind[1859]: Removed session 19.
Sep 12 17:51:23.332462 systemd[1]: Started sshd@19-172.31.28.120:22-139.178.68.195:60724.service - OpenSSH per-connection server daemon (139.178.68.195:60724).
Sep 12 17:51:23.504035 sshd[4894]: Accepted publickey for core from 139.178.68.195 port 60724 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:23.505571 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:23.511145 systemd-logind[1859]: New session 20 of user core.
Sep 12 17:51:23.520432 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:51:23.705712 sshd[4897]: Connection closed by 139.178.68.195 port 60724
Sep 12 17:51:23.706291 sshd-session[4894]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:23.710284 systemd[1]: sshd@19-172.31.28.120:22-139.178.68.195:60724.service: Deactivated successfully.
Sep 12 17:51:23.714002 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:51:23.716683 systemd-logind[1859]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:51:23.718370 systemd-logind[1859]: Removed session 20.
Sep 12 17:51:28.740179 systemd[1]: Started sshd@20-172.31.28.120:22-139.178.68.195:60730.service - OpenSSH per-connection server daemon (139.178.68.195:60730).
Sep 12 17:51:28.918701 sshd[4909]: Accepted publickey for core from 139.178.68.195 port 60730 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:28.920259 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:28.926021 systemd-logind[1859]: New session 21 of user core.
Sep 12 17:51:28.932291 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:51:29.119555 sshd[4912]: Connection closed by 139.178.68.195 port 60730
Sep 12 17:51:29.120298 sshd-session[4909]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:29.124430 systemd[1]: sshd@20-172.31.28.120:22-139.178.68.195:60730.service: Deactivated successfully.
Sep 12 17:51:29.126393 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:51:29.127684 systemd-logind[1859]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:51:29.129615 systemd-logind[1859]: Removed session 21.
Sep 12 17:51:34.154262 systemd[1]: Started sshd@21-172.31.28.120:22-139.178.68.195:51002.service - OpenSSH per-connection server daemon (139.178.68.195:51002).
Sep 12 17:51:34.331146 sshd[4924]: Accepted publickey for core from 139.178.68.195 port 51002 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:34.332477 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:34.338137 systemd-logind[1859]: New session 22 of user core.
Sep 12 17:51:34.345804 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:51:34.537412 sshd[4927]: Connection closed by 139.178.68.195 port 51002
Sep 12 17:51:34.538209 sshd-session[4924]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:34.542980 systemd-logind[1859]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:51:34.543198 systemd[1]: sshd@21-172.31.28.120:22-139.178.68.195:51002.service: Deactivated successfully.
Sep 12 17:51:34.545292 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:51:34.548155 systemd-logind[1859]: Removed session 22.
Sep 12 17:51:34.575508 systemd[1]: Started sshd@22-172.31.28.120:22-139.178.68.195:51016.service - OpenSSH per-connection server daemon (139.178.68.195:51016).
Sep 12 17:51:34.739947 sshd[4939]: Accepted publickey for core from 139.178.68.195 port 51016 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4
Sep 12 17:51:34.741674 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:34.746586 systemd-logind[1859]: New session 23 of user core.
Sep 12 17:51:34.751258 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:51:36.217757 containerd[1903]: time="2025-09-12T17:51:36.217568843Z" level=info msg="StopContainer for \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\" with timeout 30 (s)"
Sep 12 17:51:36.225260 containerd[1903]: time="2025-09-12T17:51:36.225222491Z" level=info msg="Stop container \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\" with signal terminated"
Sep 12 17:51:36.241242 systemd[1]: cri-containerd-4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c.scope: Deactivated successfully.
Sep 12 17:51:36.244477 containerd[1903]: time="2025-09-12T17:51:36.244435785Z" level=info msg="received exit event container_id:\"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\" id:\"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\" pid:4002 exited_at:{seconds:1757699496 nanos:243861256}"
Sep 12 17:51:36.244861 containerd[1903]: time="2025-09-12T17:51:36.244671900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\" id:\"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\" pid:4002 exited_at:{seconds:1757699496 nanos:243861256}"
Sep 12 17:51:36.268992 containerd[1903]: time="2025-09-12T17:51:36.268923301Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:51:36.273322 containerd[1903]: time="2025-09-12T17:51:36.273258320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" id:\"e5c2f4dd36a0f0f95f0e1055ab212c9d2b20b3691b5de1c2f21cd1927a311298\" pid:4961 exited_at:{seconds:1757699496 nanos:271244248}"
Sep 12 17:51:36.278530 containerd[1903]: time="2025-09-12T17:51:36.278467107Z" level=info msg="StopContainer for \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" with timeout 2 (s)"
Sep 12 17:51:36.279096 containerd[1903]: time="2025-09-12T17:51:36.279030691Z" level=info msg="Stop container \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" with signal terminated"
Sep 12 17:51:36.290362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c-rootfs.mount: Deactivated successfully.
Sep 12 17:51:36.295960 systemd-networkd[1824]: lxc_health: Link DOWN
Sep 12 17:51:36.295971 systemd-networkd[1824]: lxc_health: Lost carrier
Sep 12 17:51:36.313303 containerd[1903]: time="2025-09-12T17:51:36.313260214Z" level=info msg="StopContainer for \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\" returns successfully"
Sep 12 17:51:36.313937 containerd[1903]: time="2025-09-12T17:51:36.313911059Z" level=info msg="StopPodSandbox for \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\""
Sep 12 17:51:36.314009 containerd[1903]: time="2025-09-12T17:51:36.313976329Z" level=info msg="Container to stop \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:51:36.318386 systemd[1]: cri-containerd-795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975.scope: Deactivated successfully.
Sep 12 17:51:36.319389 systemd[1]: cri-containerd-795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975.scope: Consumed 8.242s CPU time, 219.9M memory peak, 99.6M read from disk, 13.3M written to disk.
Sep 12 17:51:36.320935 containerd[1903]: time="2025-09-12T17:51:36.320808143Z" level=info msg="received exit event container_id:\"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" id:\"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" pid:4034 exited_at:{seconds:1757699496 nanos:320205907}"
Sep 12 17:51:36.321583 containerd[1903]: time="2025-09-12T17:51:36.321543596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" id:\"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" pid:4034 exited_at:{seconds:1757699496 nanos:320205907}"
Sep 12 17:51:36.327749 systemd[1]: cri-containerd-a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399.scope: Deactivated successfully.
Sep 12 17:51:36.331925 containerd[1903]: time="2025-09-12T17:51:36.331878080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\" id:\"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\" pid:3419 exit_status:137 exited_at:{seconds:1757699496 nanos:330697335}"
Sep 12 17:51:36.371166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975-rootfs.mount: Deactivated successfully.
Sep 12 17:51:36.384591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399-rootfs.mount: Deactivated successfully.
Sep 12 17:51:36.387877 containerd[1903]: time="2025-09-12T17:51:36.387836863Z" level=info msg="shim disconnected" id=a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399 namespace=k8s.io
Sep 12 17:51:36.387877 containerd[1903]: time="2025-09-12T17:51:36.387877379Z" level=warning msg="cleaning up after shim disconnected" id=a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399 namespace=k8s.io
Sep 12 17:51:36.395430 containerd[1903]: time="2025-09-12T17:51:36.387888348Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:51:36.396214 containerd[1903]: time="2025-09-12T17:51:36.388128230Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/containerd.sock.ttrpc->@: write: broken pipe"
Sep 12 17:51:36.396885 containerd[1903]: time="2025-09-12T17:51:36.396855658Z" level=info msg="StopContainer for \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" returns successfully"
Sep 12 17:51:36.397711 containerd[1903]: time="2025-09-12T17:51:36.397390631Z" level=info msg="StopPodSandbox for \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\""
Sep 12 17:51:36.397711 containerd[1903]: time="2025-09-12T17:51:36.397476170Z" level=info msg="Container to stop \"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:51:36.397711 containerd[1903]: time="2025-09-12T17:51:36.397488144Z" level=info msg="Container to stop \"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:51:36.397711 containerd[1903]: time="2025-09-12T17:51:36.397495969Z" level=info msg="Container to stop \"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:51:36.397711 containerd[1903]: time="2025-09-12T17:51:36.397508795Z" level=info msg="Container to stop \"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:51:36.397711 containerd[1903]: time="2025-09-12T17:51:36.397521874Z" level=info msg="Container to stop \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:51:36.409587 systemd[1]: cri-containerd-abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9.scope: Deactivated successfully.
Sep 12 17:51:36.452123 containerd[1903]: time="2025-09-12T17:51:36.451918780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" id:\"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" pid:3364 exit_status:137 exited_at:{seconds:1757699496 nanos:415293891}"
Sep 12 17:51:36.457410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9-rootfs.mount: Deactivated successfully.
Sep 12 17:51:36.460212 containerd[1903]: time="2025-09-12T17:51:36.458455140Z" level=info msg="received exit event sandbox_id:\"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\" exit_status:137 exited_at:{seconds:1757699496 nanos:330697335}"
Sep 12 17:51:36.463560 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399-shm.mount: Deactivated successfully.
Sep 12 17:51:36.466931 containerd[1903]: time="2025-09-12T17:51:36.466889460Z" level=info msg="TearDown network for sandbox \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\" successfully"
Sep 12 17:51:36.466931 containerd[1903]: time="2025-09-12T17:51:36.466927802Z" level=info msg="StopPodSandbox for \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\" returns successfully"
Sep 12 17:51:36.471042 containerd[1903]: time="2025-09-12T17:51:36.470924239Z" level=error msg="Failed to handle event container_id:\"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" id:\"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" pid:3364 exit_status:137 exited_at:{seconds:1757699496 nanos:415293891} for abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9" error="failed to handle container TaskExit event: failed to stop sandbox: ttrpc: closed"
Sep 12 17:51:36.471042 containerd[1903]: time="2025-09-12T17:51:36.471013834Z" level=info msg="shim disconnected" id=abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9 namespace=k8s.io
Sep 12 17:51:36.471042 containerd[1903]: time="2025-09-12T17:51:36.471034232Z" level=warning msg="cleaning up after shim disconnected" id=abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9 namespace=k8s.io
Sep 12 17:51:36.471916 containerd[1903]: time="2025-09-12T17:51:36.471043020Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:51:36.495280 containerd[1903]: time="2025-09-12T17:51:36.495163901Z" level=info msg="received exit event sandbox_id:\"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" exit_status:137 exited_at:{seconds:1757699496 nanos:415293891}"
Sep 12 17:51:36.496515 containerd[1903]: time="2025-09-12T17:51:36.496071303Z" level=info msg="TearDown network for sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" successfully"
Sep 12 17:51:36.496515 containerd[1903]: time="2025-09-12T17:51:36.496509743Z" level=info msg="StopPodSandbox for \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" returns successfully"
Sep 12 17:51:36.514847 kubelet[3219]: I0912 17:51:36.514785 3219 scope.go:117] "RemoveContainer" containerID="4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c"
Sep 12 17:51:36.518658 containerd[1903]: time="2025-09-12T17:51:36.518621516Z" level=info msg="RemoveContainer for \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\""
Sep 12 17:51:36.526490 containerd[1903]: time="2025-09-12T17:51:36.526411450Z" level=info msg="RemoveContainer for \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\" returns successfully"
Sep 12 17:51:36.526976 kubelet[3219]: I0912 17:51:36.526730 3219 scope.go:117] "RemoveContainer" containerID="4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c"
Sep 12 17:51:36.527159 containerd[1903]: time="2025-09-12T17:51:36.527003917Z" level=error msg="ContainerStatus for \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\": not found"
Sep 12 17:51:36.527311 kubelet[3219]: E0912 17:51:36.527258 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\": not found" containerID="4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c"
Sep 12 17:51:36.527655 kubelet[3219]: I0912 17:51:36.527351 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c"} err="failed to get container status \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4deceac73084e59f416ed0bf7d96019ff8a58aee326e56561330b819af1e129c\": not found"
Sep 12 17:51:36.527655 kubelet[3219]: I0912 17:51:36.527606 3219 scope.go:117] "RemoveContainer" containerID="795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975"
Sep 12 17:51:36.529764 containerd[1903]: time="2025-09-12T17:51:36.529585233Z" level=info msg="RemoveContainer for \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\""
Sep 12 17:51:36.542697 containerd[1903]: time="2025-09-12T17:51:36.542638656Z" level=info msg="RemoveContainer for \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" returns successfully"
Sep 12 17:51:36.543282 kubelet[3219]: I0912 17:51:36.543248 3219 scope.go:117] "RemoveContainer" containerID="f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282"
Sep 12 17:51:36.546430 containerd[1903]: time="2025-09-12T17:51:36.545855558Z" level=info msg="RemoveContainer for \"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\""
Sep 12 17:51:36.557825 kubelet[3219]: I0912 17:51:36.557143 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-hubble-tls\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") "
Sep 12 17:51:36.557825 kubelet[3219]: I0912 17:51:36.557184 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-lib-modules\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") "
Sep 12 17:51:36.557825 kubelet[3219]: I0912 17:51:36.557208 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-hostproc\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") "
Sep 12 17:51:36.557825 kubelet[3219]: I0912 17:51:36.557229 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cni-path\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") "
Sep 12 17:51:36.558117 kubelet[3219]: I0912 17:51:36.557875 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed943fbf-2c62-41dd-a9df-ad0b568c95fd-cilium-config-path\") pod \"ed943fbf-2c62-41dd-a9df-ad0b568c95fd\" (UID: \"ed943fbf-2c62-41dd-a9df-ad0b568c95fd\") "
Sep 12 17:51:36.558117 kubelet[3219]: I0912 17:51:36.557935 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-host-proc-sys-kernel\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") "
Sep 12 17:51:36.558117 kubelet[3219]: I0912 17:51:36.557954 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-etc-cni-netd\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") "
Sep 12 17:51:36.558117 kubelet[3219]: I0912 17:51:36.557998 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwshc\" (UniqueName: \"kubernetes.io/projected/ed943fbf-2c62-41dd-a9df-ad0b568c95fd-kube-api-access-wwshc\") pod \"ed943fbf-2c62-41dd-a9df-ad0b568c95fd\" (UID: \"ed943fbf-2c62-41dd-a9df-ad0b568c95fd\") "
Sep 12 17:51:36.558117 kubelet[3219]: I0912 17:51:36.558024 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-bpf-maps\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " Sep 12 17:51:36.558117 kubelet[3219]: I0912 17:51:36.558078 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-host-proc-sys-net\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " Sep 12 17:51:36.558361 kubelet[3219]: I0912 17:51:36.558103 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cilium-cgroup\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " Sep 12 17:51:36.558361 kubelet[3219]: I0912 17:51:36.558140 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-xtables-lock\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " Sep 12 17:51:36.558361 kubelet[3219]: I0912 17:51:36.558169 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-clustermesh-secrets\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " Sep 12 17:51:36.558361 kubelet[3219]: I0912 17:51:36.558196 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k59s2\" (UniqueName: \"kubernetes.io/projected/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-kube-api-access-k59s2\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " Sep 12 
17:51:36.558361 kubelet[3219]: I0912 17:51:36.558237 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cilium-run\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " Sep 12 17:51:36.558361 kubelet[3219]: I0912 17:51:36.558268 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cilium-config-path\") pod \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\" (UID: \"f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460\") " Sep 12 17:51:36.562431 kubelet[3219]: I0912 17:51:36.562384 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:51:36.564448 kubelet[3219]: I0912 17:51:36.562462 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-hostproc" (OuterVolumeSpecName: "hostproc") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:51:36.564448 kubelet[3219]: I0912 17:51:36.562482 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cni-path" (OuterVolumeSpecName: "cni-path") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:51:36.568075 kubelet[3219]: I0912 17:51:36.566333 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:51:36.568075 kubelet[3219]: I0912 17:51:36.566383 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:51:36.569462 kubelet[3219]: I0912 17:51:36.569297 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:51:36.569587 kubelet[3219]: I0912 17:51:36.569520 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:51:36.569587 kubelet[3219]: I0912 17:51:36.569548 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:51:36.569587 kubelet[3219]: I0912 17:51:36.569581 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:51:36.574081 kubelet[3219]: I0912 17:51:36.572215 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:51:36.574081 kubelet[3219]: I0912 17:51:36.572317 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:51:36.574787 containerd[1903]: time="2025-09-12T17:51:36.574740963Z" level=info msg="RemoveContainer for \"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\" returns successfully" Sep 12 17:51:36.575520 kubelet[3219]: I0912 17:51:36.575414 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed943fbf-2c62-41dd-a9df-ad0b568c95fd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ed943fbf-2c62-41dd-a9df-ad0b568c95fd" (UID: "ed943fbf-2c62-41dd-a9df-ad0b568c95fd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:51:36.575697 kubelet[3219]: I0912 17:51:36.575673 3219 scope.go:117] "RemoveContainer" containerID="8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be" Sep 12 17:51:36.578573 kubelet[3219]: I0912 17:51:36.578537 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed943fbf-2c62-41dd-a9df-ad0b568c95fd-kube-api-access-wwshc" (OuterVolumeSpecName: "kube-api-access-wwshc") pod "ed943fbf-2c62-41dd-a9df-ad0b568c95fd" (UID: "ed943fbf-2c62-41dd-a9df-ad0b568c95fd"). InnerVolumeSpecName "kube-api-access-wwshc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:51:36.579322 kubelet[3219]: I0912 17:51:36.579300 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:51:36.581391 kubelet[3219]: I0912 17:51:36.581364 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:51:36.582075 containerd[1903]: time="2025-09-12T17:51:36.581827435Z" level=info msg="RemoveContainer for \"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\"" Sep 12 17:51:36.584009 kubelet[3219]: I0912 17:51:36.583986 3219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-kube-api-access-k59s2" (OuterVolumeSpecName: "kube-api-access-k59s2") pod "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" (UID: "f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460"). InnerVolumeSpecName "kube-api-access-k59s2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:51:36.588567 containerd[1903]: time="2025-09-12T17:51:36.588526716Z" level=info msg="RemoveContainer for \"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\" returns successfully" Sep 12 17:51:36.589142 kubelet[3219]: I0912 17:51:36.589117 3219 scope.go:117] "RemoveContainer" containerID="b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5" Sep 12 17:51:36.592090 containerd[1903]: time="2025-09-12T17:51:36.591885643Z" level=info msg="RemoveContainer for \"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\"" Sep 12 17:51:36.598337 containerd[1903]: time="2025-09-12T17:51:36.598237538Z" level=info msg="RemoveContainer for \"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\" returns successfully" Sep 12 17:51:36.598574 kubelet[3219]: I0912 17:51:36.598551 3219 scope.go:117] "RemoveContainer" containerID="eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac" Sep 12 17:51:36.600839 containerd[1903]: time="2025-09-12T17:51:36.600802990Z" level=info msg="RemoveContainer for \"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\"" Sep 12 17:51:36.608079 containerd[1903]: time="2025-09-12T17:51:36.607143008Z" level=info msg="RemoveContainer for \"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\" returns successfully" Sep 12 17:51:36.609292 kubelet[3219]: I0912 17:51:36.609267 3219 scope.go:117] "RemoveContainer" containerID="795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975" Sep 12 17:51:36.610844 containerd[1903]: time="2025-09-12T17:51:36.610802371Z" level=error msg="ContainerStatus for \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\": not found" Sep 12 17:51:36.611211 kubelet[3219]: E0912 17:51:36.611046 3219 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\": not found" containerID="795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975" Sep 12 17:51:36.611211 kubelet[3219]: I0912 17:51:36.611105 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975"} err="failed to get container status \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\": rpc error: code = NotFound desc = an error occurred when try to find container \"795f5590db74ca0d95126c56d64b90e9c4599df41c175049ec431406fa24d975\": not found" Sep 12 17:51:36.611211 kubelet[3219]: I0912 17:51:36.611128 3219 scope.go:117] "RemoveContainer" containerID="f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282" Sep 12 17:51:36.612139 containerd[1903]: time="2025-09-12T17:51:36.612091604Z" level=error msg="ContainerStatus for \"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\": not found" Sep 12 17:51:36.612254 kubelet[3219]: E0912 17:51:36.612231 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\": not found" containerID="f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282" Sep 12 17:51:36.612308 kubelet[3219]: I0912 17:51:36.612262 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282"} err="failed to get container status 
\"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\": rpc error: code = NotFound desc = an error occurred when try to find container \"f50d260f993b95c6d889428a18f75b0e725eadf5e6fc5fe11364574071a7a282\": not found" Sep 12 17:51:36.612308 kubelet[3219]: I0912 17:51:36.612279 3219 scope.go:117] "RemoveContainer" containerID="8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be" Sep 12 17:51:36.612535 containerd[1903]: time="2025-09-12T17:51:36.612509856Z" level=error msg="ContainerStatus for \"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\": not found" Sep 12 17:51:36.612630 kubelet[3219]: E0912 17:51:36.612611 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\": not found" containerID="8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be" Sep 12 17:51:36.612701 kubelet[3219]: I0912 17:51:36.612635 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be"} err="failed to get container status \"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\": rpc error: code = NotFound desc = an error occurred when try to find container \"8415cd7905762243013dc8c6ab48db5dee2492c2d64ecbdcaa6ef40dcf3df9be\": not found" Sep 12 17:51:36.612701 kubelet[3219]: I0912 17:51:36.612650 3219 scope.go:117] "RemoveContainer" containerID="b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5" Sep 12 17:51:36.612878 containerd[1903]: time="2025-09-12T17:51:36.612853857Z" level=error msg="ContainerStatus for \"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\": not found" Sep 12 17:51:36.612980 kubelet[3219]: E0912 17:51:36.612961 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\": not found" containerID="b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5" Sep 12 17:51:36.613022 kubelet[3219]: I0912 17:51:36.613007 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5"} err="failed to get container status \"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3b1305a24d7d1ce15d7127c907e927c7edde2453afd6f1ddd782769055f0ac5\": not found" Sep 12 17:51:36.613052 kubelet[3219]: I0912 17:51:36.613024 3219 scope.go:117] "RemoveContainer" containerID="eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac" Sep 12 17:51:36.613412 containerd[1903]: time="2025-09-12T17:51:36.613377548Z" level=error msg="ContainerStatus for \"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\": not found" Sep 12 17:51:36.613912 kubelet[3219]: E0912 17:51:36.613890 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\": not found" containerID="eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac" Sep 12 17:51:36.614041 kubelet[3219]: I0912 17:51:36.614018 3219 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac"} err="failed to get container status \"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\": rpc error: code = NotFound desc = an error occurred when try to find container \"eccd540d7f5cf5e0914075ef38cf8a2353555a3a68718b8343fdce35e851dcac\": not found" Sep 12 17:51:36.661524 kubelet[3219]: I0912 17:51:36.661457 3219 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-hubble-tls\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661524 kubelet[3219]: I0912 17:51:36.661492 3219 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-lib-modules\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661524 kubelet[3219]: I0912 17:51:36.661502 3219 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-hostproc\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661524 kubelet[3219]: I0912 17:51:36.661512 3219 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cni-path\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661524 kubelet[3219]: I0912 17:51:36.661522 3219 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed943fbf-2c62-41dd-a9df-ad0b568c95fd-cilium-config-path\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661524 kubelet[3219]: I0912 17:51:36.661533 3219 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-host-proc-sys-kernel\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661524 kubelet[3219]: I0912 17:51:36.661543 3219 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-etc-cni-netd\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661823 kubelet[3219]: I0912 17:51:36.661551 3219 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wwshc\" (UniqueName: \"kubernetes.io/projected/ed943fbf-2c62-41dd-a9df-ad0b568c95fd-kube-api-access-wwshc\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661823 kubelet[3219]: I0912 17:51:36.661558 3219 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-bpf-maps\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661823 kubelet[3219]: I0912 17:51:36.661566 3219 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-host-proc-sys-net\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661823 kubelet[3219]: I0912 17:51:36.661574 3219 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cilium-cgroup\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661823 kubelet[3219]: I0912 17:51:36.661582 3219 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-xtables-lock\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661823 kubelet[3219]: I0912 17:51:36.661590 3219 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-clustermesh-secrets\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661823 kubelet[3219]: I0912 17:51:36.661598 3219 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k59s2\" (UniqueName: \"kubernetes.io/projected/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-kube-api-access-k59s2\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.661823 kubelet[3219]: I0912 17:51:36.661605 3219 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cilium-run\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.662748 kubelet[3219]: I0912 17:51:36.661614 3219 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460-cilium-config-path\") on node \"ip-172-31-28-120\" DevicePath \"\"" Sep 12 17:51:36.824555 systemd[1]: Removed slice kubepods-besteffort-poded943fbf_2c62_41dd_a9df_ad0b568c95fd.slice - libcontainer container kubepods-besteffort-poded943fbf_2c62_41dd_a9df_ad0b568c95fd.slice. Sep 12 17:51:36.834405 systemd[1]: Removed slice kubepods-burstable-podf97f5d22_d7cf_4fd8_8cd8_86bd18c6d460.slice - libcontainer container kubepods-burstable-podf97f5d22_d7cf_4fd8_8cd8_86bd18c6d460.slice. Sep 12 17:51:36.834801 systemd[1]: kubepods-burstable-podf97f5d22_d7cf_4fd8_8cd8_86bd18c6d460.slice: Consumed 8.354s CPU time, 220.3M memory peak, 100.6M read from disk, 13.3M written to disk. Sep 12 17:51:37.289476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9-shm.mount: Deactivated successfully. Sep 12 17:51:37.289818 systemd[1]: var-lib-kubelet-pods-ed943fbf\x2d2c62\x2d41dd\x2da9df\x2dad0b568c95fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwwshc.mount: Deactivated successfully. 
Sep 12 17:51:37.289897 systemd[1]: var-lib-kubelet-pods-f97f5d22\x2dd7cf\x2d4fd8\x2d8cd8\x2d86bd18c6d460-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk59s2.mount: Deactivated successfully. Sep 12 17:51:37.289962 systemd[1]: var-lib-kubelet-pods-f97f5d22\x2dd7cf\x2d4fd8\x2d8cd8\x2d86bd18c6d460-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:51:37.290031 systemd[1]: var-lib-kubelet-pods-f97f5d22\x2dd7cf\x2d4fd8\x2d8cd8\x2d86bd18c6d460-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 17:51:37.894326 kubelet[3219]: I0912 17:51:37.894294 3219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed943fbf-2c62-41dd-a9df-ad0b568c95fd" path="/var/lib/kubelet/pods/ed943fbf-2c62-41dd-a9df-ad0b568c95fd/volumes" Sep 12 17:51:37.895129 kubelet[3219]: I0912 17:51:37.895105 3219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460" path="/var/lib/kubelet/pods/f97f5d22-d7cf-4fd8-8cd8-86bd18c6d460/volumes" Sep 12 17:51:38.159683 sshd[4942]: Connection closed by 139.178.68.195 port 51016 Sep 12 17:51:38.161033 sshd-session[4939]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:38.167303 systemd[1]: sshd@22-172.31.28.120:22-139.178.68.195:51016.service: Deactivated successfully. Sep 12 17:51:38.169739 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:51:38.170996 systemd-logind[1859]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:51:38.173443 systemd-logind[1859]: Removed session 23. Sep 12 17:51:38.192159 systemd[1]: Started sshd@23-172.31.28.120:22-139.178.68.195:51026.service - OpenSSH per-connection server daemon (139.178.68.195:51026). 
Sep 12 17:51:38.341402 ntpd[1854]: Deleting interface #11 lxc_health, fe80::40e1:70ff:feff:697a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=62 secs Sep 12 17:51:38.341747 ntpd[1854]: 12 Sep 17:51:38 ntpd[1854]: Deleting interface #11 lxc_health, fe80::40e1:70ff:feff:697a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=62 secs Sep 12 17:51:38.369696 sshd[5093]: Accepted publickey for core from 139.178.68.195 port 51026 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:51:38.371215 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:51:38.379406 systemd-logind[1859]: New session 24 of user core. Sep 12 17:51:38.384353 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:51:38.452352 containerd[1903]: time="2025-09-12T17:51:38.452202487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" id:\"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" pid:3364 exit_status:137 exited_at:{seconds:1757699496 nanos:415293891}" Sep 12 17:51:39.053128 sshd[5096]: Connection closed by 139.178.68.195 port 51026 Sep 12 17:51:39.055201 sshd-session[5093]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:39.062824 systemd[1]: sshd@23-172.31.28.120:22-139.178.68.195:51026.service: Deactivated successfully. Sep 12 17:51:39.065993 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:51:39.071036 systemd-logind[1859]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:51:39.098400 systemd-logind[1859]: Removed session 24. Sep 12 17:51:39.101376 systemd[1]: Started sshd@24-172.31.28.120:22-139.178.68.195:51040.service - OpenSSH per-connection server daemon (139.178.68.195:51040). 
Sep 12 17:51:39.110238 systemd[1]: Created slice kubepods-burstable-pod50f7b12b_6081_482f_a877_cddf69244197.slice - libcontainer container kubepods-burstable-pod50f7b12b_6081_482f_a877_cddf69244197.slice. Sep 12 17:51:39.176466 kubelet[3219]: I0912 17:51:39.176415 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50f7b12b-6081-482f-a877-cddf69244197-cni-path\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.177428 kubelet[3219]: I0912 17:51:39.177051 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50f7b12b-6081-482f-a877-cddf69244197-host-proc-sys-net\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.177428 kubelet[3219]: I0912 17:51:39.177204 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50f7b12b-6081-482f-a877-cddf69244197-hostproc\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.177428 kubelet[3219]: I0912 17:51:39.177326 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50f7b12b-6081-482f-a877-cddf69244197-hubble-tls\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.177428 kubelet[3219]: I0912 17:51:39.177358 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50f7b12b-6081-482f-a877-cddf69244197-cilium-run\") pod \"cilium-c6rc4\" (UID: 
\"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.177763 kubelet[3219]: I0912 17:51:39.177479 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50f7b12b-6081-482f-a877-cddf69244197-clustermesh-secrets\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.177763 kubelet[3219]: I0912 17:51:39.177507 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brz22\" (UniqueName: \"kubernetes.io/projected/50f7b12b-6081-482f-a877-cddf69244197-kube-api-access-brz22\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.178150 kubelet[3219]: I0912 17:51:39.177533 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50f7b12b-6081-482f-a877-cddf69244197-bpf-maps\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.178150 kubelet[3219]: I0912 17:51:39.177960 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50f7b12b-6081-482f-a877-cddf69244197-etc-cni-netd\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.178150 kubelet[3219]: I0912 17:51:39.177987 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50f7b12b-6081-482f-a877-cddf69244197-host-proc-sys-kernel\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.178150 
kubelet[3219]: I0912 17:51:39.178039 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50f7b12b-6081-482f-a877-cddf69244197-cilium-cgroup\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.178150 kubelet[3219]: I0912 17:51:39.178099 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50f7b12b-6081-482f-a877-cddf69244197-cilium-config-path\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.178150 kubelet[3219]: I0912 17:51:39.178123 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/50f7b12b-6081-482f-a877-cddf69244197-cilium-ipsec-secrets\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.178585 kubelet[3219]: I0912 17:51:39.178456 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50f7b12b-6081-482f-a877-cddf69244197-lib-modules\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.178585 kubelet[3219]: I0912 17:51:39.178518 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50f7b12b-6081-482f-a877-cddf69244197-xtables-lock\") pod \"cilium-c6rc4\" (UID: \"50f7b12b-6081-482f-a877-cddf69244197\") " pod="kube-system/cilium-c6rc4" Sep 12 17:51:39.308820 sshd[5107]: Accepted publickey for core from 139.178.68.195 port 51040 ssh2: RSA 
SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:51:39.314989 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:51:39.324154 systemd-logind[1859]: New session 25 of user core. Sep 12 17:51:39.332034 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:51:39.419621 containerd[1903]: time="2025-09-12T17:51:39.419550275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c6rc4,Uid:50f7b12b-6081-482f-a877-cddf69244197,Namespace:kube-system,Attempt:0,}" Sep 12 17:51:39.451258 containerd[1903]: time="2025-09-12T17:51:39.451204569Z" level=info msg="connecting to shim d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2" address="unix:///run/containerd/s/a2fc39db133b92159cf16d0f4dae9c9302d7e6857c61bcfcd1c32795146c613a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:51:39.454204 sshd[5114]: Connection closed by 139.178.68.195 port 51040 Sep 12 17:51:39.455263 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:39.461175 systemd-logind[1859]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:51:39.462137 systemd[1]: sshd@24-172.31.28.120:22-139.178.68.195:51040.service: Deactivated successfully. Sep 12 17:51:39.471947 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:51:39.488535 systemd-logind[1859]: Removed session 25. Sep 12 17:51:39.499337 systemd[1]: Started cri-containerd-d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2.scope - libcontainer container d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2. Sep 12 17:51:39.501235 systemd[1]: Started sshd@25-172.31.28.120:22-139.178.68.195:51054.service - OpenSSH per-connection server daemon (139.178.68.195:51054). 
Sep 12 17:51:39.533747 containerd[1903]: time="2025-09-12T17:51:39.533715767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c6rc4,Uid:50f7b12b-6081-482f-a877-cddf69244197,Namespace:kube-system,Attempt:0,} returns sandbox id \"d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2\"" Sep 12 17:51:39.542415 containerd[1903]: time="2025-09-12T17:51:39.542375491Z" level=info msg="CreateContainer within sandbox \"d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:51:39.553493 containerd[1903]: time="2025-09-12T17:51:39.553455686Z" level=info msg="Container dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:51:39.565603 containerd[1903]: time="2025-09-12T17:51:39.565398887Z" level=info msg="CreateContainer within sandbox \"d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9\"" Sep 12 17:51:39.566793 containerd[1903]: time="2025-09-12T17:51:39.566720884Z" level=info msg="StartContainer for \"dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9\"" Sep 12 17:51:39.567841 containerd[1903]: time="2025-09-12T17:51:39.567808657Z" level=info msg="connecting to shim dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9" address="unix:///run/containerd/s/a2fc39db133b92159cf16d0f4dae9c9302d7e6857c61bcfcd1c32795146c613a" protocol=ttrpc version=3 Sep 12 17:51:39.598300 systemd[1]: Started cri-containerd-dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9.scope - libcontainer container dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9. 
Sep 12 17:51:39.630011 containerd[1903]: time="2025-09-12T17:51:39.629975241Z" level=info msg="StartContainer for \"dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9\" returns successfully" Sep 12 17:51:39.650068 systemd[1]: cri-containerd-dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9.scope: Deactivated successfully. Sep 12 17:51:39.650793 systemd[1]: cri-containerd-dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9.scope: Consumed 22ms CPU time, 9.5M memory peak, 3M read from disk. Sep 12 17:51:39.654223 containerd[1903]: time="2025-09-12T17:51:39.654178216Z" level=info msg="received exit event container_id:\"dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9\" id:\"dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9\" pid:5180 exited_at:{seconds:1757699499 nanos:653784154}" Sep 12 17:51:39.654461 containerd[1903]: time="2025-09-12T17:51:39.654185215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9\" id:\"dae3e6fdb2d9e8e884bba5ca91d929743e50c5ba4a65693893d3a81eaa74cdb9\" pid:5180 exited_at:{seconds:1757699499 nanos:653784154}" Sep 12 17:51:39.683948 sshd[5153]: Accepted publickey for core from 139.178.68.195 port 51054 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:51:39.685575 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:51:39.690378 systemd-logind[1859]: New session 26 of user core. Sep 12 17:51:39.700287 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 12 17:51:40.080275 kubelet[3219]: E0912 17:51:40.080223 3219 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:51:40.547364 containerd[1903]: time="2025-09-12T17:51:40.547324228Z" level=info msg="CreateContainer within sandbox \"d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:51:40.568085 containerd[1903]: time="2025-09-12T17:51:40.565234263Z" level=info msg="Container 0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:51:40.578896 containerd[1903]: time="2025-09-12T17:51:40.578855125Z" level=info msg="CreateContainer within sandbox \"d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa\"" Sep 12 17:51:40.579590 containerd[1903]: time="2025-09-12T17:51:40.579565471Z" level=info msg="StartContainer for \"0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa\"" Sep 12 17:51:40.580937 containerd[1903]: time="2025-09-12T17:51:40.580743585Z" level=info msg="connecting to shim 0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa" address="unix:///run/containerd/s/a2fc39db133b92159cf16d0f4dae9c9302d7e6857c61bcfcd1c32795146c613a" protocol=ttrpc version=3 Sep 12 17:51:40.602285 systemd[1]: Started cri-containerd-0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa.scope - libcontainer container 0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa. 
Sep 12 17:51:40.638975 containerd[1903]: time="2025-09-12T17:51:40.638312664Z" level=info msg="StartContainer for \"0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa\" returns successfully" Sep 12 17:51:40.654311 systemd[1]: cri-containerd-0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa.scope: Deactivated successfully. Sep 12 17:51:40.654663 systemd[1]: cri-containerd-0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa.scope: Consumed 22ms CPU time, 7.5M memory peak, 2.2M read from disk. Sep 12 17:51:40.655781 containerd[1903]: time="2025-09-12T17:51:40.655373271Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa\" id:\"0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa\" pid:5236 exited_at:{seconds:1757699500 nanos:654540238}" Sep 12 17:51:40.655781 containerd[1903]: time="2025-09-12T17:51:40.655520029Z" level=info msg="received exit event container_id:\"0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa\" id:\"0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa\" pid:5236 exited_at:{seconds:1757699500 nanos:654540238}" Sep 12 17:51:41.287160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f8e2fb1391713d90fd3c60a52e507e4bb32965f7fa97efc056fd7ae528651aa-rootfs.mount: Deactivated successfully. 
Sep 12 17:51:41.549330 containerd[1903]: time="2025-09-12T17:51:41.548762940Z" level=info msg="CreateContainer within sandbox \"d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:51:41.565387 containerd[1903]: time="2025-09-12T17:51:41.565344583Z" level=info msg="Container 737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:51:41.581510 containerd[1903]: time="2025-09-12T17:51:41.581458421Z" level=info msg="CreateContainer within sandbox \"d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5\"" Sep 12 17:51:41.583633 containerd[1903]: time="2025-09-12T17:51:41.582368557Z" level=info msg="StartContainer for \"737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5\"" Sep 12 17:51:41.584863 containerd[1903]: time="2025-09-12T17:51:41.584816156Z" level=info msg="connecting to shim 737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5" address="unix:///run/containerd/s/a2fc39db133b92159cf16d0f4dae9c9302d7e6857c61bcfcd1c32795146c613a" protocol=ttrpc version=3 Sep 12 17:51:41.614282 systemd[1]: Started cri-containerd-737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5.scope - libcontainer container 737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5. Sep 12 17:51:41.657382 containerd[1903]: time="2025-09-12T17:51:41.657025276Z" level=info msg="StartContainer for \"737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5\" returns successfully" Sep 12 17:51:41.665140 systemd[1]: cri-containerd-737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5.scope: Deactivated successfully. 
Sep 12 17:51:41.665404 systemd[1]: cri-containerd-737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5.scope: Consumed 24ms CPU time, 5.9M memory peak, 1.1M read from disk. Sep 12 17:51:41.668366 containerd[1903]: time="2025-09-12T17:51:41.668140850Z" level=info msg="received exit event container_id:\"737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5\" id:\"737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5\" pid:5279 exited_at:{seconds:1757699501 nanos:667834542}" Sep 12 17:51:41.668850 containerd[1903]: time="2025-09-12T17:51:41.668768096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5\" id:\"737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5\" pid:5279 exited_at:{seconds:1757699501 nanos:667834542}" Sep 12 17:51:41.693812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-737ee41408eedb4c81887ecfbab937efd1cb9e6d87ce019f758c406a11436ab5-rootfs.mount: Deactivated successfully. 
Sep 12 17:51:42.554764 containerd[1903]: time="2025-09-12T17:51:42.554712415Z" level=info msg="CreateContainer within sandbox \"d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:51:42.572332 containerd[1903]: time="2025-09-12T17:51:42.572204365Z" level=info msg="Container a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:51:42.589380 containerd[1903]: time="2025-09-12T17:51:42.589339953Z" level=info msg="CreateContainer within sandbox \"d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682\"" Sep 12 17:51:42.590155 containerd[1903]: time="2025-09-12T17:51:42.590112727Z" level=info msg="StartContainer for \"a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682\"" Sep 12 17:51:42.591479 containerd[1903]: time="2025-09-12T17:51:42.591374177Z" level=info msg="connecting to shim a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682" address="unix:///run/containerd/s/a2fc39db133b92159cf16d0f4dae9c9302d7e6857c61bcfcd1c32795146c613a" protocol=ttrpc version=3 Sep 12 17:51:42.614279 systemd[1]: Started cri-containerd-a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682.scope - libcontainer container a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682. Sep 12 17:51:42.647720 systemd[1]: cri-containerd-a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682.scope: Deactivated successfully. 
Sep 12 17:51:42.649674 containerd[1903]: time="2025-09-12T17:51:42.649183024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682\" id:\"a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682\" pid:5321 exited_at:{seconds:1757699502 nanos:648904624}" Sep 12 17:51:42.653826 containerd[1903]: time="2025-09-12T17:51:42.653778389Z" level=info msg="received exit event container_id:\"a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682\" id:\"a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682\" pid:5321 exited_at:{seconds:1757699502 nanos:648904624}" Sep 12 17:51:42.663430 containerd[1903]: time="2025-09-12T17:51:42.663378745Z" level=info msg="StartContainer for \"a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682\" returns successfully" Sep 12 17:51:42.681864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9216e1891f7a43baf3b9b0b9f7ade769eee5cb4f6f7e4bfc2c82db29cabf682-rootfs.mount: Deactivated successfully. Sep 12 17:51:42.867156 kubelet[3219]: I0912 17:51:42.866995 3219 setters.go:618] "Node became not ready" node="ip-172-31-28-120" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:51:42Z","lastTransitionTime":"2025-09-12T17:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 17:51:43.562084 containerd[1903]: time="2025-09-12T17:51:43.561645279Z" level=info msg="CreateContainer within sandbox \"d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:51:43.594712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549044816.mount: Deactivated successfully. 
Sep 12 17:51:43.600495 containerd[1903]: time="2025-09-12T17:51:43.600452007Z" level=info msg="Container ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:51:43.608374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959084480.mount: Deactivated successfully. Sep 12 17:51:43.615033 containerd[1903]: time="2025-09-12T17:51:43.614974404Z" level=info msg="CreateContainer within sandbox \"d02851e777ab3ecbdc514e29e33d51291f72072ab0fdf3f5ad8b34ae12a155e2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827\"" Sep 12 17:51:43.615586 containerd[1903]: time="2025-09-12T17:51:43.615564613Z" level=info msg="StartContainer for \"ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827\"" Sep 12 17:51:43.616786 containerd[1903]: time="2025-09-12T17:51:43.616719939Z" level=info msg="connecting to shim ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827" address="unix:///run/containerd/s/a2fc39db133b92159cf16d0f4dae9c9302d7e6857c61bcfcd1c32795146c613a" protocol=ttrpc version=3 Sep 12 17:51:43.639249 systemd[1]: Started cri-containerd-ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827.scope - libcontainer container ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827. 
Sep 12 17:51:43.679929 containerd[1903]: time="2025-09-12T17:51:43.679895780Z" level=info msg="StartContainer for \"ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827\" returns successfully" Sep 12 17:51:43.815993 containerd[1903]: time="2025-09-12T17:51:43.815885594Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827\" id:\"937a4b9ef4c31ad584228acc6f9d7ffa8d6da7085331495379147e07d1aef3dd\" pid:5386 exited_at:{seconds:1757699503 nanos:815547882}" Sep 12 17:51:43.891494 kubelet[3219]: E0912 17:51:43.891430 3219 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-dxn4f" podUID="4030bdec-e4a4-445f-8274-a09ecd080f50" Sep 12 17:51:44.473138 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 12 17:51:44.592379 kubelet[3219]: I0912 17:51:44.592230 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c6rc4" podStartSLOduration=5.592209495 podStartE2EDuration="5.592209495s" podCreationTimestamp="2025-09-12 17:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:51:44.590329765 +0000 UTC m=+94.866861413" watchObservedRunningTime="2025-09-12 17:51:44.592209495 +0000 UTC m=+94.868741134" Sep 12 17:51:46.357238 containerd[1903]: time="2025-09-12T17:51:46.357189813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827\" id:\"8c31820f927179bcc3c8808c2249bfab78b2112737b47389c2f58ab143a5e22b\" pid:5531 exit_status:1 exited_at:{seconds:1757699506 nanos:355000762}" Sep 12 17:51:47.622987 (udev-worker)[5877]: Network 
interface NamePolicy= disabled on kernel command line. Sep 12 17:51:47.624386 systemd-networkd[1824]: lxc_health: Link UP Sep 12 17:51:47.632599 systemd-networkd[1824]: lxc_health: Gained carrier Sep 12 17:51:48.582205 containerd[1903]: time="2025-09-12T17:51:48.581961818Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827\" id:\"6b6aeb3d43ecbd1b77974b8358a317f791506de98bc92175f900ed4b369ec1c4\" pid:5909 exited_at:{seconds:1757699508 nanos:581375603}" Sep 12 17:51:49.485378 systemd-networkd[1824]: lxc_health: Gained IPv6LL Sep 12 17:51:50.884695 containerd[1903]: time="2025-09-12T17:51:50.884645250Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827\" id:\"abb3f38701472f7589c5fbda50a6ccb80c3b872d80b6515af20b94fff50edf91\" pid:5942 exited_at:{seconds:1757699510 nanos:884317010}" Sep 12 17:51:53.002394 containerd[1903]: time="2025-09-12T17:51:53.002342740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827\" id:\"3b5b2061fc9b949124cbff4d31e0190ba45333faf3449a8821fb5cc41ec8498c\" pid:5972 exited_at:{seconds:1757699513 nanos:1113929}" Sep 12 17:51:54.341453 ntpd[1854]: Listen normally on 14 lxc_health [fe80::b01f:b7ff:fed3:5b29%14]:123 Sep 12 17:51:54.341882 ntpd[1854]: 12 Sep 17:51:54 ntpd[1854]: Listen normally on 14 lxc_health [fe80::b01f:b7ff:fed3:5b29%14]:123 Sep 12 17:51:55.193465 containerd[1903]: time="2025-09-12T17:51:55.192773508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae7ae2ffd1920b03bde1bef3df65fcbfbc07658104a3c47fda1d9941465fa827\" id:\"9f914f49364d582a02eba34dfeb4e09d94b367f3826fbae68d673b2d608d6bff\" pid:5995 exited_at:{seconds:1757699515 nanos:191108445}" Sep 12 17:51:55.221373 sshd[5214]: Connection closed by 139.178.68.195 port 51054 Sep 12 17:51:55.223022 sshd-session[5153]: 
pam_unix(sshd:session): session closed for user core Sep 12 17:51:55.227509 systemd[1]: sshd@25-172.31.28.120:22-139.178.68.195:51054.service: Deactivated successfully. Sep 12 17:51:55.234904 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:51:55.238739 systemd-logind[1859]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:51:55.240461 systemd-logind[1859]: Removed session 26. Sep 12 17:52:09.900220 containerd[1903]: time="2025-09-12T17:52:09.900075194Z" level=info msg="StopPodSandbox for \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\"" Sep 12 17:52:09.900637 containerd[1903]: time="2025-09-12T17:52:09.900279971Z" level=info msg="TearDown network for sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" successfully" Sep 12 17:52:09.900637 containerd[1903]: time="2025-09-12T17:52:09.900296143Z" level=info msg="StopPodSandbox for \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" returns successfully" Sep 12 17:52:09.901189 containerd[1903]: time="2025-09-12T17:52:09.901130587Z" level=info msg="RemovePodSandbox for \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\"" Sep 12 17:52:09.911155 containerd[1903]: time="2025-09-12T17:52:09.911097685Z" level=info msg="Forcibly stopping sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\"" Sep 12 17:52:09.911357 containerd[1903]: time="2025-09-12T17:52:09.911288092Z" level=info msg="TearDown network for sandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" successfully" Sep 12 17:52:09.916053 containerd[1903]: time="2025-09-12T17:52:09.915674557Z" level=info msg="Ensure that sandbox abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9 in task-service has been cleanup successfully" Sep 12 17:52:09.922431 containerd[1903]: time="2025-09-12T17:52:09.922369966Z" level=info msg="RemovePodSandbox \"abd65360feec8deaec2cc1b32828d1acf008d3d6fc30b72a48e7a25c34d8b0e9\" 
returns successfully" Sep 12 17:52:09.922918 containerd[1903]: time="2025-09-12T17:52:09.922887367Z" level=info msg="StopPodSandbox for \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\"" Sep 12 17:52:09.923074 containerd[1903]: time="2025-09-12T17:52:09.923031921Z" level=info msg="TearDown network for sandbox \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\" successfully" Sep 12 17:52:09.923074 containerd[1903]: time="2025-09-12T17:52:09.923069579Z" level=info msg="StopPodSandbox for \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\" returns successfully" Sep 12 17:52:09.923539 containerd[1903]: time="2025-09-12T17:52:09.923500880Z" level=info msg="RemovePodSandbox for \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\"" Sep 12 17:52:09.923539 containerd[1903]: time="2025-09-12T17:52:09.923535822Z" level=info msg="Forcibly stopping sandbox \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\"" Sep 12 17:52:09.923652 containerd[1903]: time="2025-09-12T17:52:09.923641867Z" level=info msg="TearDown network for sandbox \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\" successfully" Sep 12 17:52:09.925654 containerd[1903]: time="2025-09-12T17:52:09.925618175Z" level=info msg="Ensure that sandbox a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399 in task-service has been cleanup successfully" Sep 12 17:52:09.931947 containerd[1903]: time="2025-09-12T17:52:09.931903802Z" level=info msg="RemovePodSandbox \"a345dacb017bc10dfffe1072f51fc4c7bcf7db0d880128c776e814c007d8e399\" returns successfully" Sep 12 17:52:21.276516 systemd[1]: cri-containerd-0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f.scope: Deactivated successfully. Sep 12 17:52:21.277527 systemd[1]: cri-containerd-0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f.scope: Consumed 3.442s CPU time, 75.9M memory peak, 27.3M read from disk. 
Sep 12 17:52:21.280019 containerd[1903]: time="2025-09-12T17:52:21.279980014Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f\" id:\"0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f\" pid:3050 exit_status:1 exited_at:{seconds:1757699541 nanos:279595445}" Sep 12 17:52:21.281044 containerd[1903]: time="2025-09-12T17:52:21.280265166Z" level=info msg="received exit event container_id:\"0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f\" id:\"0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f\" pid:3050 exit_status:1 exited_at:{seconds:1757699541 nanos:279595445}" Sep 12 17:52:21.306260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f-rootfs.mount: Deactivated successfully. Sep 12 17:52:21.608892 kubelet[3219]: E0912 17:52:21.608751 3219 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-120?timeout=10s\": context deadline exceeded" Sep 12 17:52:21.665453 kubelet[3219]: I0912 17:52:21.665428 3219 scope.go:117] "RemoveContainer" containerID="0c691dface748b8a4511072788faf84fe76430136cd366edfa9fdb7ef654d30f" Sep 12 17:52:21.668237 containerd[1903]: time="2025-09-12T17:52:21.667869435Z" level=info msg="CreateContainer within sandbox \"39dd71cba3486288bf2058b78cf278b5d038d2c771ee471482f9355e0bd32009\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 12 17:52:21.687679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221881079.mount: Deactivated successfully. 
Sep 12 17:52:21.690077 containerd[1903]: time="2025-09-12T17:52:21.688223689Z" level=info msg="Container adfc100b50ded14fdab727f1aca06702d3826b051309c138ebb89f3da6fda2c4: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:52:21.704610 containerd[1903]: time="2025-09-12T17:52:21.704568613Z" level=info msg="CreateContainer within sandbox \"39dd71cba3486288bf2058b78cf278b5d038d2c771ee471482f9355e0bd32009\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"adfc100b50ded14fdab727f1aca06702d3826b051309c138ebb89f3da6fda2c4\"" Sep 12 17:52:21.705199 containerd[1903]: time="2025-09-12T17:52:21.705145934Z" level=info msg="StartContainer for \"adfc100b50ded14fdab727f1aca06702d3826b051309c138ebb89f3da6fda2c4\"" Sep 12 17:52:21.706267 containerd[1903]: time="2025-09-12T17:52:21.706234587Z" level=info msg="connecting to shim adfc100b50ded14fdab727f1aca06702d3826b051309c138ebb89f3da6fda2c4" address="unix:///run/containerd/s/4eccce1c84bc67c5d166b8214196eb387b9f082d156f2a86cebce798d7de96d8" protocol=ttrpc version=3 Sep 12 17:52:21.741449 systemd[1]: Started cri-containerd-adfc100b50ded14fdab727f1aca06702d3826b051309c138ebb89f3da6fda2c4.scope - libcontainer container adfc100b50ded14fdab727f1aca06702d3826b051309c138ebb89f3da6fda2c4. Sep 12 17:52:21.802694 containerd[1903]: time="2025-09-12T17:52:21.802648707Z" level=info msg="StartContainer for \"adfc100b50ded14fdab727f1aca06702d3826b051309c138ebb89f3da6fda2c4\" returns successfully" Sep 12 17:52:25.726915 systemd[1]: cri-containerd-36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2.scope: Deactivated successfully. Sep 12 17:52:25.728459 systemd[1]: cri-containerd-36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2.scope: Consumed 2.584s CPU time, 30.2M memory peak, 12.6M read from disk. 
Sep 12 17:52:25.730277 containerd[1903]: time="2025-09-12T17:52:25.730125423Z" level=info msg="received exit event container_id:\"36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2\" id:\"36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2\" pid:3058 exit_status:1 exited_at:{seconds:1757699545 nanos:729467354}" Sep 12 17:52:25.730956 containerd[1903]: time="2025-09-12T17:52:25.730777025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2\" id:\"36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2\" pid:3058 exit_status:1 exited_at:{seconds:1757699545 nanos:729467354}" Sep 12 17:52:25.790361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2-rootfs.mount: Deactivated successfully. Sep 12 17:52:26.680264 kubelet[3219]: I0912 17:52:26.680150 3219 scope.go:117] "RemoveContainer" containerID="36f73b3f518c429ded194b75ec18c2ce527b6de12544d5e245524ae26efeb7a2" Sep 12 17:52:26.682612 containerd[1903]: time="2025-09-12T17:52:26.682573267Z" level=info msg="CreateContainer within sandbox \"09457f692184c2c3bfb184549c5fcaf14a34dcda88ff79c1f0714eeb2ffe4477\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 12 17:52:26.701179 containerd[1903]: time="2025-09-12T17:52:26.700765811Z" level=info msg="Container 80059111c718faa4aee5c66f3ce1d3d02ac4715111c82a051f3e3ee1243c3325: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:52:26.717431 containerd[1903]: time="2025-09-12T17:52:26.717373415Z" level=info msg="CreateContainer within sandbox \"09457f692184c2c3bfb184549c5fcaf14a34dcda88ff79c1f0714eeb2ffe4477\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"80059111c718faa4aee5c66f3ce1d3d02ac4715111c82a051f3e3ee1243c3325\"" Sep 12 17:52:26.718592 containerd[1903]: time="2025-09-12T17:52:26.718544654Z" level=info msg="StartContainer for 
\"80059111c718faa4aee5c66f3ce1d3d02ac4715111c82a051f3e3ee1243c3325\"" Sep 12 17:52:26.727825 containerd[1903]: time="2025-09-12T17:52:26.727764356Z" level=info msg="connecting to shim 80059111c718faa4aee5c66f3ce1d3d02ac4715111c82a051f3e3ee1243c3325" address="unix:///run/containerd/s/bfd02d89c92ff2b34e58e8dfbc9a28af32e40f39952a2dc6137c18ef6c5728fc" protocol=ttrpc version=3 Sep 12 17:52:26.755309 systemd[1]: Started cri-containerd-80059111c718faa4aee5c66f3ce1d3d02ac4715111c82a051f3e3ee1243c3325.scope - libcontainer container 80059111c718faa4aee5c66f3ce1d3d02ac4715111c82a051f3e3ee1243c3325. Sep 12 17:52:26.810908 containerd[1903]: time="2025-09-12T17:52:26.810869392Z" level=info msg="StartContainer for \"80059111c718faa4aee5c66f3ce1d3d02ac4715111c82a051f3e3ee1243c3325\" returns successfully" Sep 12 17:52:31.609925 kubelet[3219]: E0912 17:52:31.609859 3219 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-120?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"