Sep 12 17:42:53.927439 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:34:39 -00 2025 Sep 12 17:42:53.927479 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858 Sep 12 17:42:53.927494 kernel: BIOS-provided physical RAM map: Sep 12 17:42:53.927506 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 17:42:53.927517 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Sep 12 17:42:53.927528 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 12 17:42:53.927543 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 12 17:42:53.927554 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 12 17:42:53.927569 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 12 17:42:53.927581 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 12 17:42:53.927594 kernel: NX (Execute Disable) protection: active Sep 12 17:42:53.927606 kernel: APIC: Static calls initialized Sep 12 17:42:53.927618 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Sep 12 17:42:53.927631 kernel: extended physical RAM map: Sep 12 17:42:53.927649 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 17:42:53.927662 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Sep 12 17:42:53.927676 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Sep 12 17:42:53.927689 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Sep 12 17:42:53.927703 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 12 17:42:53.927716 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 12 17:42:53.927729 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 12 17:42:53.927743 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 12 17:42:53.927756 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 12 17:42:53.927769 kernel: efi: EFI v2.7 by EDK II Sep 12 17:42:53.927785 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Sep 12 17:42:53.927799 kernel: secureboot: Secure boot disabled Sep 12 17:42:53.927812 kernel: SMBIOS 2.7 present. 
Sep 12 17:42:53.927825 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Sep 12 17:42:53.927838 kernel: DMI: Memory slots populated: 1/1 Sep 12 17:42:53.927851 kernel: Hypervisor detected: KVM Sep 12 17:42:53.927865 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 12 17:42:53.927878 kernel: kvm-clock: using sched offset of 5191805611 cycles Sep 12 17:42:53.927892 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 17:42:53.927906 kernel: tsc: Detected 2500.004 MHz processor Sep 12 17:42:53.927920 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 17:42:53.927937 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 17:42:53.927951 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Sep 12 17:42:53.927974 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 12 17:42:53.927988 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 17:42:53.928001 kernel: Using GB pages for direct mapping Sep 12 17:42:53.928021 kernel: ACPI: Early table checksum verification disabled Sep 12 17:42:53.928038 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Sep 12 17:42:53.928053 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Sep 12 17:42:53.928069 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 12 17:42:53.928084 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Sep 12 17:42:53.928097 kernel: ACPI: FACS 0x00000000789D0000 000040 Sep 12 17:42:53.928112 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Sep 12 17:42:53.928127 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 12 17:42:53.928141 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 12 17:42:53.928158 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Sep 12 17:42:53.928173 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Sep 12 17:42:53.928187 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 12 17:42:53.928202 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 12 17:42:53.928216 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Sep 12 17:42:53.928231 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Sep 12 17:42:53.928246 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Sep 12 17:42:53.928260 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Sep 12 17:42:53.928277 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Sep 12 17:42:53.928292 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Sep 12 17:42:53.929335 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Sep 12 17:42:53.929361 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Sep 12 17:42:53.929375 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Sep 12 17:42:53.929386 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Sep 12 17:42:53.929398 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Sep 12 17:42:53.929412 kernel: ACPI: Reserving BGRT table memory 
at [mem 0x78951000-0x78951037] Sep 12 17:42:53.929424 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Sep 12 17:42:53.929436 kernel: NUMA: Initialized distance table, cnt=1 Sep 12 17:42:53.929455 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Sep 12 17:42:53.929470 kernel: Zone ranges: Sep 12 17:42:53.929483 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 17:42:53.929498 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Sep 12 17:42:53.929512 kernel: Normal empty Sep 12 17:42:53.929526 kernel: Device empty Sep 12 17:42:53.929540 kernel: Movable zone start for each node Sep 12 17:42:53.929554 kernel: Early memory node ranges Sep 12 17:42:53.929568 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 12 17:42:53.929585 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Sep 12 17:42:53.929599 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Sep 12 17:42:53.929613 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Sep 12 17:42:53.929627 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 17:42:53.929641 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 12 17:42:53.929655 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 12 17:42:53.929670 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Sep 12 17:42:53.929684 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 12 17:42:53.929698 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 12 17:42:53.929715 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Sep 12 17:42:53.929729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 12 17:42:53.929743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 17:42:53.929758 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 12 17:42:53.929772 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 12 17:42:53.929787 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 17:42:53.929801 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 12 17:42:53.929814 kernel: TSC deadline timer available Sep 12 17:42:53.929829 kernel: CPU topo: Max. logical packages: 1 Sep 12 17:42:53.929842 kernel: CPU topo: Max. logical dies: 1 Sep 12 17:42:53.929860 kernel: CPU topo: Max. dies per package: 1 Sep 12 17:42:53.929874 kernel: CPU topo: Max. threads per core: 2 Sep 12 17:42:53.929887 kernel: CPU topo: Num. cores per package: 1 Sep 12 17:42:53.929901 kernel: CPU topo: Num. 
threads per package: 2 Sep 12 17:42:53.929915 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 12 17:42:53.929929 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 12 17:42:53.929944 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Sep 12 17:42:53.929958 kernel: Booting paravirtualized kernel on KVM Sep 12 17:42:53.929972 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 17:42:53.929990 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 12 17:42:53.930004 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 12 17:42:53.930019 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 12 17:42:53.930031 kernel: pcpu-alloc: [0] 0 1 Sep 12 17:42:53.930043 kernel: kvm-guest: PV spinlocks enabled Sep 12 17:42:53.930054 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 12 17:42:53.930068 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858 Sep 12 17:42:53.930081 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:42:53.930097 kernel: random: crng init done Sep 12 17:42:53.930108 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:42:53.930121 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 12 17:42:53.930133 kernel: Fallback order for Node 0: 0 Sep 12 17:42:53.930145 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Sep 12 17:42:53.930158 kernel: Policy zone: DMA32 Sep 12 17:42:53.930184 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:42:53.930199 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:42:53.930213 kernel: Kernel/User page tables isolation: enabled Sep 12 17:42:53.930227 kernel: ftrace: allocating 40125 entries in 157 pages Sep 12 17:42:53.930241 kernel: ftrace: allocated 157 pages with 5 groups Sep 12 17:42:53.930255 kernel: Dynamic Preempt: voluntary Sep 12 17:42:53.930268 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:42:53.930282 kernel: rcu: RCU event tracing is enabled. Sep 12 17:42:53.930295 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:42:53.930334 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:42:53.930348 kernel: Rude variant of Tasks RCU enabled. Sep 12 17:42:53.930364 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:42:53.930377 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:42:53.930390 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:42:53.930404 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:42:53.930420 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:42:53.930436 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 12 17:42:53.930450 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 12 17:42:53.930466 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:42:53.930485 kernel: Console: colour dummy device 80x25 Sep 12 17:42:53.930499 kernel: printk: legacy console [tty0] enabled Sep 12 17:42:53.930515 kernel: printk: legacy console [ttyS0] enabled Sep 12 17:42:53.930530 kernel: ACPI: Core revision 20240827 Sep 12 17:42:53.930546 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Sep 12 17:42:53.930562 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 17:42:53.930577 kernel: x2apic enabled Sep 12 17:42:53.930593 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 17:42:53.930609 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Sep 12 17:42:53.930628 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004) Sep 12 17:42:53.930643 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 12 17:42:53.930659 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 12 17:42:53.930675 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 17:42:53.930690 kernel: Spectre V2 : Mitigation: Retpolines Sep 12 17:42:53.930706 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 17:42:53.930722 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Sep 12 17:42:53.930737 kernel: RETBleed: Vulnerable Sep 12 17:42:53.930753 kernel: Speculative Store Bypass: Vulnerable Sep 12 17:42:53.930768 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Sep 12 17:42:53.930783 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 12 17:42:53.930801 kernel: GDS: Unknown: Dependent on hypervisor status Sep 12 17:42:53.930816 kernel: active return thunk: its_return_thunk Sep 12 17:42:53.930831 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 12 17:42:53.930847 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 17:42:53.930863 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 17:42:53.930879 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 17:42:53.930894 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 12 17:42:53.930910 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 12 17:42:53.930926 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 12 17:42:53.930942 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 12 17:42:53.930957 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 12 17:42:53.930976 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Sep 12 17:42:53.930991 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 17:42:53.931007 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 12 17:42:53.931022 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 12 17:42:53.931038 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Sep 12 17:42:53.931054 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Sep 12 17:42:53.931069 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Sep 12 17:42:53.931085 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Sep 12 
17:42:53.931101 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Sep 12 17:42:53.931117 kernel: Freeing SMP alternatives memory: 32K Sep 12 17:42:53.931132 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:42:53.931151 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 12 17:42:53.931166 kernel: landlock: Up and running. Sep 12 17:42:53.931182 kernel: SELinux: Initializing. Sep 12 17:42:53.931198 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 17:42:53.931213 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 17:42:53.931229 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 12 17:42:53.931245 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 12 17:42:53.931261 kernel: signal: max sigframe size: 3632 Sep 12 17:42:53.931277 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:42:53.931293 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:42:53.933352 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 12 17:42:53.933375 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 12 17:42:53.933392 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:42:53.933407 kernel: smpboot: x86: Booting SMP configuration: Sep 12 17:42:53.933423 kernel: .... node #0, CPUs: #1 Sep 12 17:42:53.933438 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 12 17:42:53.933454 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 12 17:42:53.933469 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:42:53.933484 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS) Sep 12 17:42:53.933506 kernel: Memory: 1908060K/2037804K available (14336K kernel code, 2432K rwdata, 9960K rodata, 54040K init, 2924K bss, 125188K reserved, 0K cma-reserved) Sep 12 17:42:53.933521 kernel: devtmpfs: initialized Sep 12 17:42:53.933537 kernel: x86/mm: Memory block size: 128MB Sep 12 17:42:53.933552 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Sep 12 17:42:53.933568 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:42:53.933583 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 17:42:53.933598 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:42:53.933613 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:42:53.933629 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:42:53.933647 kernel: audit: type=2000 audit(1757698971.828:1): state=initialized audit_enabled=0 res=1 Sep 12 17:42:53.933662 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:42:53.933677 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 17:42:53.933692 kernel: cpuidle: using governor menu Sep 12 17:42:53.933707 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:42:53.933722 kernel: dca service started, version 1.12.1 Sep 12 17:42:53.933737 kernel: PCI: Using configuration type 1 for base access Sep 12 17:42:53.933753 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Sep 12 17:42:53.933770 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:42:53.933786 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:42:53.933802 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:42:53.933817 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:42:53.933832 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:42:53.933848 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:42:53.933863 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:42:53.933878 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 12 17:42:53.933893 kernel: ACPI: Interpreter enabled Sep 12 17:42:53.933908 kernel: ACPI: PM: (supports S0 S5) Sep 12 17:42:53.933926 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 17:42:53.933942 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 17:42:53.933957 kernel: PCI: Using E820 reservations for host bridge windows Sep 12 17:42:53.933973 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 12 17:42:53.933988 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 17:42:53.934229 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:42:53.935423 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 12 17:42:53.935582 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 12 17:42:53.935601 kernel: acpiphp: Slot [3] registered Sep 12 17:42:53.935615 kernel: acpiphp: Slot [4] registered Sep 12 17:42:53.935629 kernel: acpiphp: Slot [5] registered Sep 12 17:42:53.935644 kernel: acpiphp: Slot [6] registered Sep 12 17:42:53.935658 kernel: acpiphp: Slot [7] registered Sep 12 17:42:53.935671 kernel: acpiphp: Slot [8] registered Sep 12 17:42:53.935686 kernel: acpiphp: Slot [9] registered Sep 12 17:42:53.935700 kernel: acpiphp: Slot [10] registered Sep 12 17:42:53.935717 kernel: acpiphp: Slot [11] registered Sep 12 17:42:53.935730 kernel: acpiphp: Slot [12] registered Sep 12 17:42:53.935744 kernel: acpiphp: Slot [13] registered Sep 12 17:42:53.935758 kernel: acpiphp: Slot [14] registered Sep 12 17:42:53.935772 kernel: acpiphp: Slot [15] registered Sep 12 17:42:53.935785 kernel: acpiphp: Slot [16] registered Sep 12 17:42:53.935799 kernel: acpiphp: Slot [17] registered Sep 12 17:42:53.935813 kernel: acpiphp: Slot [18] registered Sep 12 17:42:53.935827 kernel: acpiphp: Slot [19] registered Sep 12 17:42:53.935840 kernel: acpiphp: Slot [20] registered Sep 12 17:42:53.935857 kernel: acpiphp: Slot [21] registered Sep 12 17:42:53.935871 kernel: acpiphp: Slot [22] registered Sep 12 17:42:53.935885 kernel: acpiphp: Slot [23] registered Sep 12 17:42:53.935899 kernel: acpiphp: Slot [24] registered Sep 12 17:42:53.935912 kernel: acpiphp: Slot [25] registered Sep 12 17:42:53.935926 kernel: acpiphp: Slot [26] registered Sep 12 17:42:53.935940 kernel: acpiphp: Slot [27] registered Sep 12 17:42:53.935953 kernel: acpiphp: Slot [28] registered Sep 12 17:42:53.935977 kernel: acpiphp: Slot [29] registered Sep 12 17:42:53.935993 kernel: acpiphp: Slot [30] registered Sep 12 17:42:53.936006 kernel: acpiphp: Slot [31] registered Sep 12 17:42:53.936019 kernel: PCI host bridge to bus 0000:00 Sep 12 17:42:53.936158 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 
17:42:53.936279 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 12 17:42:53.937505 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 12 17:42:53.937645 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 12 17:42:53.937768 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Sep 12 17:42:53.937895 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 17:42:53.938052 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Sep 12 17:42:53.938204 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Sep 12 17:42:53.938372 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Sep 12 17:42:53.938512 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 12 17:42:53.938653 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Sep 12 17:42:53.938788 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Sep 12 17:42:53.938925 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Sep 12 17:42:53.939059 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Sep 12 17:42:53.939194 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Sep 12 17:42:53.941186 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Sep 12 17:42:53.941395 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Sep 12 17:42:53.941543 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Sep 12 17:42:53.941688 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 12 17:42:53.941821 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 12 17:42:53.941997 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Sep 12 17:42:53.942171 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Sep 12 17:42:53.942331 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Sep 12 17:42:53.943649 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Sep 12 17:42:53.943677 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 12 17:42:53.943694 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 12 17:42:53.943710 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 17:42:53.943726 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 12 17:42:53.943741 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 12 17:42:53.943757 kernel: iommu: Default domain type: Translated Sep 12 17:42:53.943773 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 17:42:53.943789 kernel: efivars: Registered efivars operations Sep 12 17:42:53.943804 kernel: PCI: Using ACPI for IRQ routing Sep 12 17:42:53.943823 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 17:42:53.943839 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Sep 12 17:42:53.943854 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Sep 12 17:42:53.943868 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Sep 12 17:42:53.944027 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Sep 12 17:42:53.944165 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Sep 12 17:42:53.944303 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 12 17:42:53.946361 kernel: vgaarb: loaded Sep 12 17:42:53.946379 kernel: hpet0: at 
MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 12 17:42:53.946400 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Sep 12 17:42:53.946416 kernel: clocksource: Switched to clocksource kvm-clock Sep 12 17:42:53.946432 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:42:53.946448 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:42:53.946464 kernel: pnp: PnP ACPI init Sep 12 17:42:53.946480 kernel: pnp: PnP ACPI: found 5 devices Sep 12 17:42:53.946496 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 17:42:53.946512 kernel: NET: Registered PF_INET protocol family Sep 12 17:42:53.946528 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:42:53.946547 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 12 17:42:53.946563 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:42:53.946579 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 17:42:53.946595 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 12 17:42:53.946611 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 12 17:42:53.946626 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 12 17:42:53.946642 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 12 17:42:53.946658 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:42:53.946677 kernel: NET: Registered PF_XDP protocol family Sep 12 17:42:53.946834 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 17:42:53.946960 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 17:42:53.947084 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 12 17:42:53.947206 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 12 17:42:53.947347 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Sep 12 17:42:53.947495 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 12 17:42:53.947516 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:42:53.947536 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 12 17:42:53.947552 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Sep 12 17:42:53.947568 kernel: clocksource: Switched to clocksource tsc Sep 12 17:42:53.947584 kernel: Initialise system trusted keyrings Sep 12 17:42:53.947600 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 12 17:42:53.947616 kernel: Key type asymmetric registered Sep 12 17:42:53.947631 kernel: Asymmetric key parser 'x509' registered Sep 12 17:42:53.947647 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 17:42:53.947663 kernel: io scheduler mq-deadline registered Sep 12 17:42:53.947682 kernel: io scheduler kyber registered Sep 12 17:42:53.947698 kernel: io scheduler bfq registered Sep 12 17:42:53.947714 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 17:42:53.947730 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:42:53.947746 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 17:42:53.947762 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 12 17:42:53.947778 kernel: i8042: Warning: Keylock active Sep 12 
17:42:53.947793 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 17:42:53.947809 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 17:42:53.947971 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 12 17:42:53.948105 kernel: rtc_cmos 00:00: registered as rtc0 Sep 12 17:42:53.948233 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T17:42:53 UTC (1757698973) Sep 12 17:42:53.949679 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 12 17:42:53.949727 kernel: intel_pstate: CPU model not supported Sep 12 17:42:53.949745 kernel: efifb: probing for efifb Sep 12 17:42:53.949760 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Sep 12 17:42:53.949775 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Sep 12 17:42:53.949795 kernel: efifb: scrolling: redraw Sep 12 17:42:53.949812 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 17:42:53.949833 kernel: Console: switching to colour frame buffer device 100x37 Sep 12 17:42:53.949851 kernel: fb0: EFI VGA frame buffer device Sep 12 17:42:53.949868 kernel: pstore: Using crash dump compression: deflate Sep 12 17:42:53.949886 kernel: pstore: Registered efi_pstore as persistent store backend Sep 12 17:42:53.949904 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:42:53.949922 kernel: Segment Routing with IPv6 Sep 12 17:42:53.949940 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:42:53.949961 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:42:53.949979 kernel: Key type dns_resolver registered Sep 12 17:42:53.949997 kernel: IPI shorthand broadcast: enabled Sep 12 17:42:53.950014 kernel: sched_clock: Marking stable (2671003290, 154674188)->(2914292097, -88614619) Sep 12 17:42:53.950032 kernel: registered taskstats version 1 Sep 12 17:42:53.950050 kernel: Loading compiled-in X.509 certificates Sep 12 17:42:53.950068 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: f1ae8d6e9bfae84d90f4136cf098b0465b2a5bd7' Sep 12 17:42:53.950085 kernel: Demotion targets for Node 0: null Sep 12 17:42:53.950102 kernel: Key type .fscrypt registered Sep 12 17:42:53.950123 kernel: Key type fscrypt-provisioning registered Sep 12 17:42:53.950141 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 17:42:53.950158 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:42:53.950175 kernel: ima: No architecture policies found Sep 12 17:42:53.950189 kernel: clk: Disabling unused clocks Sep 12 17:42:53.950206 kernel: Warning: unable to open an initial console. Sep 12 17:42:53.950222 kernel: Freeing unused kernel image (initmem) memory: 54040K Sep 12 17:42:53.950236 kernel: Write protecting the kernel read-only data: 24576k Sep 12 17:42:53.950252 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 12 17:42:53.950270 kernel: Run /init as init process Sep 12 17:42:53.950285 kernel: with arguments: Sep 12 17:42:53.950299 kernel: /init Sep 12 17:42:53.951350 kernel: with environment: Sep 12 17:42:53.951372 kernel: HOME=/ Sep 12 17:42:53.951388 kernel: TERM=linux Sep 12 17:42:53.951409 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:42:53.951427 systemd[1]: Successfully made /usr/ read-only. 
Sep 12 17:42:53.951448 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:42:53.951465 systemd[1]: Detected virtualization amazon. Sep 12 17:42:53.951481 systemd[1]: Detected architecture x86-64. Sep 12 17:42:53.951497 systemd[1]: Running in initrd. Sep 12 17:42:53.951513 systemd[1]: No hostname configured, using default hostname. Sep 12 17:42:53.951533 systemd[1]: Hostname set to . Sep 12 17:42:53.951550 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:42:53.951566 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:42:53.951582 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:42:53.951599 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:42:53.951617 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:42:53.951634 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:42:53.951651 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:42:53.951672 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:42:53.951690 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:42:53.951708 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:42:53.951727 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:42:53.951744 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:42:53.951761 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:42:53.951780 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:42:53.951797 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:42:53.951815 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:42:53.951832 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:42:53.951851 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:42:53.951869 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:42:53.951887 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 17:42:53.951908 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:42:53.951927 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:42:53.951949 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:42:53.951979 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:42:53.951998 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:42:53.952016 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:42:53.952035 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Sep 12 17:42:53.952054 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 17:42:53.952072 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:42:53.952090 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:42:53.952110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:42:53.952127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:42:53.952144 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:42:53.952162 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:42:53.952216 systemd-journald[207]: Collecting audit messages is disabled. Sep 12 17:42:53.952258 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:42:53.952275 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:42:53.952294 systemd-journald[207]: Journal started Sep 12 17:42:53.953363 systemd-journald[207]: Runtime Journal (/run/log/journal/ec29608a1bf0a191d0b546e3c716b15d) is 4.8M, max 38.4M, 33.6M free. Sep 12 17:42:53.911779 systemd-modules-load[208]: Inserted module 'overlay' Sep 12 17:42:53.960575 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:42:53.962268 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:42:53.967451 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:42:53.975600 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:42:53.975644 kernel: Bridge firewalling registered Sep 12 17:42:53.971224 systemd-modules-load[208]: Inserted module 'br_netfilter' Sep 12 17:42:53.979462 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:42:53.983158 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:42:53.987566 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:42:53.994492 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:42:53.997945 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:42:54.000611 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 17:42:54.009362 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 17:42:54.009957 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:42:54.021607 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:42:54.026507 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:42:54.028879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:42:54.035349 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:42:54.038608 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 17:42:54.060040 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858 Sep 12 17:42:54.089366 systemd-resolved[245]: Positive Trust Anchors: Sep 12 17:42:54.089382 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:42:54.089439 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:42:54.099566 systemd-resolved[245]: Defaulting to hostname 'linux'. Sep 12 17:42:54.101061 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:42:54.102658 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:42:54.159347 kernel: SCSI subsystem initialized Sep 12 17:42:54.168406 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:42:54.180734 kernel: iscsi: registered transport (tcp) Sep 12 17:42:54.202729 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:42:54.202814 kernel: QLogic iSCSI HBA Driver Sep 12 17:42:54.229880 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:42:54.246085 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:42:54.247138 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:42:54.294175 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:42:54.296497 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:42:54.348367 kernel: raid6: avx512x4 gen() 18032 MB/s Sep 12 17:42:54.366341 kernel: raid6: avx512x2 gen() 17875 MB/s Sep 12 17:42:54.384365 kernel: raid6: avx512x1 gen() 17787 MB/s Sep 12 17:42:54.402339 kernel: raid6: avx2x4 gen() 17727 MB/s Sep 12 17:42:54.420357 kernel: raid6: avx2x2 gen() 17585 MB/s Sep 12 17:42:54.438546 kernel: raid6: avx2x1 gen() 13877 MB/s Sep 12 17:42:54.438605 kernel: raid6: using algorithm avx512x4 gen() 18032 MB/s Sep 12 17:42:54.457512 kernel: raid6: .... xor() 7620 MB/s, rmw enabled Sep 12 17:42:54.457588 kernel: raid6: using avx512x2 recovery algorithm Sep 12 17:42:54.478344 kernel: xor: automatically using best checksumming function avx Sep 12 17:42:54.648345 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:42:54.655329 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:42:54.657728 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:42:54.689541 systemd-udevd[455]: Using default interface naming scheme 'v255'. 
Sep 12 17:42:54.696526 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:42:54.700668 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:42:54.733032 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Sep 12 17:42:54.737407 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 17:42:54.761716 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:42:54.763913 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:42:54.830062 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:42:54.834552 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:42:54.930878 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 12 17:42:54.931158 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 12 17:42:54.937337 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 12 17:42:54.940141 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 12 17:42:54.945330 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 12 17:42:54.950674 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:42:54.953333 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 12 17:42:54.968925 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:42:54.968988 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:75:9b:50:02:77 Sep 12 17:42:54.969248 kernel: GPT:9289727 != 16777215 Sep 12 17:42:54.969271 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:42:54.969292 kernel: GPT:9289727 != 16777215 Sep 12 17:42:54.969338 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:42:54.969358 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:42:54.968575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:42:54.968807 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:42:54.978589 (udev-worker)[513]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:42:54.979110 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:42:54.982981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:42:54.986820 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:42:54.992879 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:42:54.994417 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:42:54.999433 kernel: AES CTR mode by8 optimization enabled Sep 12 17:42:55.007099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:42:55.041356 kernel: nvme nvme0: using unchecked data buffer Sep 12 17:42:55.056550 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:42:55.179477 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 12 17:42:55.204212 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 12 17:42:55.205119 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:42:55.217629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Sep 12 17:42:55.227864 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 12 17:42:55.228483 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 12 17:42:55.230046 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:42:55.231126 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:42:55.232389 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:42:55.234134 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:42:55.239510 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:42:55.259120 disk-uuid[688]: Primary Header is updated. Sep 12 17:42:55.259120 disk-uuid[688]: Secondary Entries is updated. Sep 12 17:42:55.259120 disk-uuid[688]: Secondary Header is updated. Sep 12 17:42:55.266858 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:42:55.270262 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:42:55.283484 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:42:56.278573 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:42:56.279482 disk-uuid[690]: The operation has completed successfully. Sep 12 17:42:56.423171 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:42:56.423321 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:42:56.462530 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:42:56.476988 sh[954]: Success Sep 12 17:42:56.498775 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:42:56.498859 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:42:56.499598 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 17:42:56.512327 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 12 17:42:56.609532 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:42:56.614415 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:42:56.627146 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:42:56.652358 kernel: BTRFS: device fsid 74707491-1b86-4926-8bdb-c533ce2a0c32 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (977) Sep 12 17:42:56.656193 kernel: BTRFS info (device dm-0): first mount of filesystem 74707491-1b86-4926-8bdb-c533ce2a0c32 Sep 12 17:42:56.656257 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:42:56.689324 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:42:56.689422 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:42:56.689444 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 17:42:56.693971 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:42:56.694859 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:42:56.695395 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:42:56.696296 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 12 17:42:56.698359 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:42:56.739341 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1011) Sep 12 17:42:56.744341 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:42:56.744416 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:42:56.752861 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:42:56.752939 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:42:56.762670 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:42:56.764094 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:42:56.766499 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:42:56.815914 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:42:56.819369 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:42:56.871674 systemd-networkd[1146]: lo: Link UP Sep 12 17:42:56.871689 systemd-networkd[1146]: lo: Gained carrier Sep 12 17:42:56.873582 systemd-networkd[1146]: Enumeration completed Sep 12 17:42:56.873719 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:42:56.874464 systemd[1]: Reached target network.target - Network. Sep 12 17:42:56.875014 systemd-networkd[1146]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:42:56.875018 systemd-networkd[1146]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:42:56.878518 systemd-networkd[1146]: eth0: Link UP Sep 12 17:42:56.878524 systemd-networkd[1146]: eth0: Gained carrier Sep 12 17:42:56.878544 systemd-networkd[1146]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:42:56.891526 systemd-networkd[1146]: eth0: DHCPv4 address 172.31.16.83/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 17:42:57.282704 ignition[1085]: Ignition 2.21.0 Sep 12 17:42:57.282723 ignition[1085]: Stage: fetch-offline Sep 12 17:42:57.282945 ignition[1085]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:42:57.282958 ignition[1085]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:42:57.283554 ignition[1085]: Ignition finished successfully Sep 12 17:42:57.285480 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:42:57.287784 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 12 17:42:57.317801 ignition[1159]: Ignition 2.21.0 Sep 12 17:42:57.317817 ignition[1159]: Stage: fetch Sep 12 17:42:57.318191 ignition[1159]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:42:57.318204 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:42:57.318338 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:42:57.363176 ignition[1159]: PUT result: OK Sep 12 17:42:57.370033 ignition[1159]: parsed url from cmdline: "" Sep 12 17:42:57.370159 ignition[1159]: no config URL provided Sep 12 17:42:57.370195 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:42:57.370208 ignition[1159]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:42:57.370236 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:42:57.372272 ignition[1159]: PUT result: OK Sep 12 17:42:57.372347 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 12 17:42:57.374703 ignition[1159]: GET result: OK Sep 12 17:42:57.374828 ignition[1159]: parsing config with SHA512: 7277ae1a386290ae3ae70c01cdf080bff347b9bdb7c765e1b054b5aea0f60c34470d6fc1069ecb0ff61837bdc1a1e5417e3eeac146b6007d66cf652581c1de36 Sep 12 17:42:57.382481 unknown[1159]: fetched base config from "system" Sep 12 17:42:57.382902 ignition[1159]: fetch: fetch complete Sep 12 17:42:57.382497 unknown[1159]: fetched base config from "system" Sep 12 17:42:57.382908 ignition[1159]: fetch: fetch passed Sep 12 17:42:57.382506 unknown[1159]: fetched user config from "aws" Sep 12 17:42:57.382952 ignition[1159]: Ignition finished successfully Sep 12 17:42:57.387546 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:42:57.389140 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:42:57.421389 ignition[1165]: Ignition 2.21.0 Sep 12 17:42:57.421406 ignition[1165]: Stage: kargs Sep 12 17:42:57.422699 ignition[1165]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:42:57.422713 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:42:57.422878 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:42:57.425597 ignition[1165]: PUT result: OK Sep 12 17:42:57.428483 ignition[1165]: kargs: kargs passed Sep 12 17:42:57.428560 ignition[1165]: Ignition finished successfully Sep 12 17:42:57.430270 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:42:57.432258 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:42:57.461831 ignition[1172]: Ignition 2.21.0 Sep 12 17:42:57.461850 ignition[1172]: Stage: disks Sep 12 17:42:57.462204 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:42:57.462217 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:42:57.462348 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:42:57.463840 ignition[1172]: PUT result: OK Sep 12 17:42:57.468890 ignition[1172]: disks: disks passed Sep 12 17:42:57.468969 ignition[1172]: Ignition finished successfully Sep 12 17:42:57.470582 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:42:57.471615 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:42:57.472427 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:42:57.472777 systemd[1]: Reached target local-fs.target - Local File Systems. 
Sep 12 17:42:57.473372 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:42:57.473936 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:42:57.475682 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:42:57.520184 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 17:42:57.522772 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:42:57.525340 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:42:57.661351 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 26739aba-b0be-4ce3-bfbd-ca4dbcbe2426 r/w with ordered data mode. Quota mode: none. Sep 12 17:42:57.661957 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:42:57.662993 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:42:57.664866 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:42:57.667420 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:42:57.668667 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:42:57.669074 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:42:57.669099 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:42:57.688747 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:42:57.691006 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:42:57.706341 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1200) Sep 12 17:42:57.711008 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:42:57.711078 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:42:57.718658 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:42:57.718728 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:42:57.721257 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:42:57.964106 initrd-setup-root[1224]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:42:57.969714 initrd-setup-root[1231]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:42:57.974206 initrd-setup-root[1238]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:42:57.978580 initrd-setup-root[1245]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:42:58.165534 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:42:58.167723 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:42:58.170500 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:42:58.191139 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:42:58.193355 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:42:58.226678 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 12 17:42:58.231377 ignition[1313]: INFO : Ignition 2.21.0 Sep 12 17:42:58.231377 ignition[1313]: INFO : Stage: mount Sep 12 17:42:58.233051 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:42:58.233051 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:42:58.233051 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:42:58.234539 ignition[1313]: INFO : PUT result: OK Sep 12 17:42:58.235736 ignition[1313]: INFO : mount: mount passed Sep 12 17:42:58.236392 ignition[1313]: INFO : Ignition finished successfully Sep 12 17:42:58.237643 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:42:58.239539 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:42:58.652516 systemd-networkd[1146]: eth0: Gained IPv6LL Sep 12 17:42:58.663777 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:42:58.704365 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326) Sep 12 17:42:58.707496 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:42:58.707567 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:42:58.716358 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:42:58.716438 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:42:58.719486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:42:58.767093 ignition[1342]: INFO : Ignition 2.21.0 Sep 12 17:42:58.767093 ignition[1342]: INFO : Stage: files Sep 12 17:42:58.768862 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:42:58.768862 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:42:58.768862 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:42:58.768862 ignition[1342]: INFO : PUT result: OK Sep 12 17:42:58.771442 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:42:58.772323 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:42:58.772323 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:42:58.776820 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:42:58.777829 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:42:58.777829 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:42:58.777365 unknown[1342]: wrote ssh authorized keys file for user: core Sep 12 17:42:58.780931 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 17:42:58.781743 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 12 17:42:58.946696 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:42:59.336550 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 17:42:59.336550 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:42:59.338282 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 17:42:59.534032 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:42:59.634216 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:42:59.634216 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:42:59.638015 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:42:59.638015 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:42:59.638015 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:42:59.638015 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:42:59.638015 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:42:59.638015 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:42:59.638015 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:42:59.644618 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:42:59.644618 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:42:59.644618 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:42:59.644618 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:42:59.644618 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:42:59.649008 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 12 17:42:59.973589 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:43:00.779897 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:43:00.779897 ignition[1342]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 17:43:00.788551 ignition[1342]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:43:00.788551 ignition[1342]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:43:00.788551 ignition[1342]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 17:43:00.788551 ignition[1342]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:43:00.788551 ignition[1342]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:43:00.788551 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:43:00.828933 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:43:00.828933 ignition[1342]: INFO : files: files passed Sep 12 17:43:00.828933 ignition[1342]: INFO : Ignition finished successfully Sep 12 17:43:00.790880 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:43:00.799638 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:43:00.816606 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:43:00.873502 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:43:00.873966 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:43:00.909658 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:43:00.924862 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:43:00.926258 initrd-setup-root-after-ignition[1373]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:43:00.927015 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:43:00.936688 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:43:00.939040 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:43:01.091505 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:43:01.091653 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:43:01.093027 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:43:01.102742 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:43:01.103760 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:43:01.105032 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:43:01.161443 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:43:01.172986 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:43:01.248521 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:43:01.249351 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:43:01.253598 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:43:01.254581 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:43:01.254830 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:43:01.261246 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Sep 12 17:43:01.265863 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:43:01.266873 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:43:01.272652 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:43:01.273692 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:43:01.280984 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:43:01.293031 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:43:01.293954 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:43:01.304695 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:43:01.305971 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:43:01.306890 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:43:01.312265 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:43:01.312495 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:43:01.314767 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:43:01.316514 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:43:01.318123 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:43:01.318331 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:43:01.320479 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:43:01.320716 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:43:01.334977 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:43:01.335211 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:43:01.341846 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:43:01.342038 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:43:01.345591 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:43:01.356298 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:43:01.362066 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:43:01.362603 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:43:01.370979 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:43:01.371863 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:43:01.418927 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:43:01.419068 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:43:01.507990 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:43:01.528356 ignition[1397]: INFO : Ignition 2.21.0 Sep 12 17:43:01.530473 ignition[1397]: INFO : Stage: umount Sep 12 17:43:01.530473 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:43:01.530473 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:43:01.530473 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:43:01.533262 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:43:01.535469 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Sep 12 17:43:01.546427 ignition[1397]: INFO : PUT result: OK Sep 12 17:43:01.557524 ignition[1397]: INFO : umount: umount passed Sep 12 17:43:01.558266 ignition[1397]: INFO : Ignition finished successfully Sep 12 17:43:01.566120 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:43:01.566283 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:43:01.568493 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:43:01.568622 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:43:01.572558 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:43:01.572650 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:43:01.573445 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:43:01.573518 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:43:01.575300 systemd[1]: Stopped target network.target - Network. Sep 12 17:43:01.581050 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:43:01.581154 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:43:01.591508 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:43:01.592245 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:43:01.595421 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:43:01.595858 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:43:01.599829 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:43:01.604393 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:43:01.604464 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:43:01.608567 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:43:01.608619 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:43:01.609061 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:43:01.609134 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:43:01.611411 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:43:01.611488 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:43:01.617506 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:43:01.617612 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:43:01.622794 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:43:01.624641 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:43:01.637754 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:43:01.637946 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:43:01.642827 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 17:43:01.643160 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:43:01.643322 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:43:01.645893 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 17:43:01.647131 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 17:43:01.648404 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:43:01.648458 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Sep 12 17:43:01.650627 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:43:01.651031 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:43:01.651106 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:43:01.654494 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:43:01.654571 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:43:01.656595 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:43:01.656773 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:43:01.658095 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:43:01.658176 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:43:01.659084 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:43:01.664113 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:43:01.664219 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:43:01.684700 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:43:01.695032 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:43:01.696543 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:43:01.696639 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:43:01.706656 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:43:01.706714 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:43:01.711203 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:43:01.711303 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:43:01.714474 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:43:01.714561 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:43:01.718879 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:43:01.718980 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:43:01.726413 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:43:01.727153 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 17:43:01.727240 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:43:01.729469 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:43:01.729545 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:43:01.731448 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:43:01.731519 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:43:01.738087 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:43:01.738171 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:43:01.740379 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:43:01.740463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 17:43:01.748443 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 12 17:43:01.748553 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 12 17:43:01.748607 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 17:43:01.748669 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:43:01.749154 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:43:01.755456 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:43:01.778644 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:43:01.778809 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:43:01.794152 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:43:01.798921 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:43:01.864106 systemd[1]: Switching root. Sep 12 17:43:01.926468 systemd-journald[207]: Journal stopped Sep 12 17:43:03.851204 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Sep 12 17:43:03.852355 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:43:03.852414 kernel: SELinux: policy capability open_perms=1 Sep 12 17:43:03.852439 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:43:03.852457 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:43:03.852474 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:43:03.852492 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:43:03.852514 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:43:03.852532 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:43:03.852549 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 17:43:03.852567 kernel: audit: type=1403 audit(1757698982.385:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:43:03.852590 systemd[1]: Successfully loaded SELinux policy in 107.243ms. Sep 12 17:43:03.852626 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.689ms. Sep 12 17:43:03.852646 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:43:03.852666 systemd[1]: Detected virtualization amazon. Sep 12 17:43:03.852691 systemd[1]: Detected architecture x86-64. Sep 12 17:43:03.852709 systemd[1]: Detected first boot. Sep 12 17:43:03.852727 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:43:03.852748 zram_generator::config[1440]: No configuration found. Sep 12 17:43:03.852770 kernel: Guest personality initialized and is inactive Sep 12 17:43:03.852788 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 17:43:03.852804 kernel: Initialized host personality Sep 12 17:43:03.852822 kernel: NET: Registered PF_VSOCK protocol family Sep 12 17:43:03.852840 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:43:03.852861 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Sep 12 17:43:03.852881 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:43:03.852900 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:43:03.852918 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:43:03.852940 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:43:03.852960 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:43:03.852982 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:43:03.853000 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:43:03.853020 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:43:03.853039 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:43:03.853057 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:43:03.853075 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:43:03.853098 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:43:03.853117 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:43:03.853135 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:43:03.853153 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:43:03.853172 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:43:03.853200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:43:03.853221 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:43:03.853240 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:43:03.853262 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:43:03.853282 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:43:03.853302 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:43:03.854378 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:43:03.854405 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:43:03.854426 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:43:03.854446 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:43:03.854466 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:43:03.854486 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:43:03.854512 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:43:03.854532 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:43:03.854553 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 17:43:03.854574 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:43:03.854594 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:43:03.854614 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 12 17:43:03.854634 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:43:03.854654 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:43:03.854674 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:43:03.854697 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:43:03.854718 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:43:03.854738 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:43:03.854758 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:43:03.854779 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:43:03.854799 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:43:03.854821 systemd[1]: Reached target machines.target - Containers. Sep 12 17:43:03.854841 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:43:03.854862 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:43:03.854885 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:43:03.854906 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:43:03.854927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:43:03.854946 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:43:03.854968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:43:03.854990 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:43:03.855011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:43:03.855034 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:43:03.855058 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:43:03.855081 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:43:03.855104 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:43:03.855127 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:43:03.855151 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:43:03.855174 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:43:03.855196 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:43:03.855222 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:43:03.855245 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:43:03.855266 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 17:43:03.855286 kernel: fuse: init (API version 7.41) Sep 12 17:43:03.855321 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Sep 12 17:43:03.857391 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:43:03.857418 systemd[1]: Stopped verity-setup.service. Sep 12 17:43:03.857438 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:43:03.857458 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:43:03.857476 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:43:03.857495 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:43:03.857515 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:43:03.857537 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:43:03.857556 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:43:03.857575 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:43:03.857595 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:43:03.857614 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:43:03.857633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:43:03.857650 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:43:03.857668 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:43:03.857688 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:43:03.857717 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:43:03.857741 kernel: loop: module loaded Sep 12 17:43:03.857762 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:43:03.857781 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:43:03.857799 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:43:03.857823 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:43:03.857841 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:43:03.857859 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 17:43:03.857919 systemd-journald[1528]: Collecting audit messages is disabled. Sep 12 17:43:03.857961 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:43:03.857981 kernel: ACPI: bus type drm_connector registered Sep 12 17:43:03.857999 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:43:03.858024 systemd-journald[1528]: Journal started Sep 12 17:43:03.858063 systemd-journald[1528]: Runtime Journal (/run/log/journal/ec29608a1bf0a191d0b546e3c716b15d) is 4.8M, max 38.4M, 33.6M free. Sep 12 17:43:03.427755 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:43:03.440066 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 12 17:43:03.440566 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:43:03.869325 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:43:03.878006 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:43:03.880346 systemd[1]: Reached target local-fs.target - Local File Systems. 
Sep 12 17:43:03.885339 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 17:43:03.898332 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:43:03.898442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:43:03.913337 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:43:03.917772 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:43:03.923342 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:43:03.936394 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:43:03.948435 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:43:03.958434 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:43:03.965424 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:43:03.970876 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:43:03.971160 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:43:03.977858 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:43:03.978105 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:43:03.982840 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:43:03.987009 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:43:03.988912 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:43:03.992166 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:43:04.013869 kernel: loop0: detected capacity change from 0 to 111000 Sep 12 17:43:04.042169 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:43:04.054891 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Sep 12 17:43:04.056388 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Sep 12 17:43:04.056584 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:43:04.060529 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:43:04.064546 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:43:04.065334 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:43:04.080207 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:43:04.093139 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:43:04.092166 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:43:04.109392 systemd-journald[1528]: Time spent on flushing to /var/log/journal/ec29608a1bf0a191d0b546e3c716b15d is 83.068ms for 1038 entries. Sep 12 17:43:04.109392 systemd-journald[1528]: System Journal (/var/log/journal/ec29608a1bf0a191d0b546e3c716b15d) is 8M, max 195.6M, 187.6M free. Sep 12 17:43:04.210949 systemd-journald[1528]: Received client request to flush runtime journal. 
Sep 12 17:43:04.211072 kernel: loop1: detected capacity change from 0 to 128016 Sep 12 17:43:04.211102 kernel: loop2: detected capacity change from 0 to 72360 Sep 12 17:43:04.202374 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:43:04.204713 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:43:04.215899 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:43:04.222686 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 17:43:04.242654 systemd-tmpfiles[1593]: ACLs are not supported, ignoring. Sep 12 17:43:04.243059 systemd-tmpfiles[1593]: ACLs are not supported, ignoring. Sep 12 17:43:04.252455 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:43:04.329345 kernel: loop3: detected capacity change from 0 to 224512 Sep 12 17:43:04.443697 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:43:04.480342 kernel: loop4: detected capacity change from 0 to 111000 Sep 12 17:43:04.514340 kernel: loop5: detected capacity change from 0 to 128016 Sep 12 17:43:04.547784 kernel: loop6: detected capacity change from 0 to 72360 Sep 12 17:43:04.575260 kernel: loop7: detected capacity change from 0 to 224512 Sep 12 17:43:04.612883 (sd-merge)[1601]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 12 17:43:04.615812 (sd-merge)[1601]: Merged extensions into '/usr'. Sep 12 17:43:04.635516 systemd[1]: Reload requested from client PID 1555 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:43:04.635537 systemd[1]: Reloading... Sep 12 17:43:04.776837 zram_generator::config[1624]: No configuration found. Sep 12 17:43:04.949016 ldconfig[1551]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:43:05.118045 systemd[1]: Reloading finished in 481 ms. Sep 12 17:43:05.133287 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:43:05.139355 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:43:05.148525 systemd[1]: Starting ensure-sysext.service... Sep 12 17:43:05.151603 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:43:05.183164 systemd[1]: Reload requested from client PID 1679 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:43:05.183192 systemd[1]: Reloading... Sep 12 17:43:05.209959 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 17:43:05.210502 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 17:43:05.211005 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:43:05.211436 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:43:05.214891 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:43:05.215947 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Sep 12 17:43:05.216038 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Sep 12 17:43:05.221500 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. 
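[Editor's aside, not part of the boot log] The (sd-merge) entries above report the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-ami' extensions being merged into /usr. Conceptually, systemd-sysext stacks the extensions' /usr trees over the base /usr as read-only overlay layers; the sketch below only illustrates that layering order with hypothetical staging paths and is not systemd's implementation:

    # Conceptual sketch of sysext-style layering; the staging paths are hypothetical.
    extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-ami"]  # names from the log
    # In overlayfs the leftmost lowerdir is the topmost layer, so the extensions
    # come first and the base /usr comes last.
    layers = [f"/run/sysext-staging/{name}/usr" for name in extensions] + ["/usr"]
    print("mount -t overlay overlay -o ro,lowerdir=" + ":".join(layers) + " /usr")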
Sep 12 17:43:05.223758 systemd-tmpfiles[1681]: Skipping /boot Sep 12 17:43:05.240600 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:43:05.240619 systemd-tmpfiles[1681]: Skipping /boot Sep 12 17:43:05.308352 zram_generator::config[1708]: No configuration found. Sep 12 17:43:05.507858 systemd[1]: Reloading finished in 323 ms. Sep 12 17:43:05.530623 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:43:05.536990 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:43:05.545470 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:43:05.548139 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:43:05.556740 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:43:05.562516 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:43:05.566995 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:43:05.570474 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:43:05.582806 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:43:05.583127 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:43:05.585440 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:43:05.589903 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:43:05.597433 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:43:05.598185 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:43:05.598460 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:43:05.598625 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:43:05.605619 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:43:05.610878 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:43:05.611672 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:43:05.612245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:43:05.613370 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:43:05.613511 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:43:05.624017 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 17:43:05.625420 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:43:05.634416 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:43:05.635178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:43:05.635533 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:43:05.635809 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:43:05.637619 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:43:05.655870 systemd[1]: Finished ensure-sysext.service. Sep 12 17:43:05.660602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:43:05.660866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:43:05.662189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:43:05.662834 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:43:05.667638 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:43:05.670970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:43:05.676047 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:43:05.681108 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:43:05.694436 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:43:05.695222 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:43:05.706209 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:43:05.708412 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:43:05.710038 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:43:05.729851 augenrules[1800]: No rules Sep 12 17:43:05.731020 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:43:05.732612 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:43:05.738477 systemd-udevd[1766]: Using default interface naming scheme 'v255'. Sep 12 17:43:05.741417 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:43:05.763265 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:43:05.764065 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:43:05.766010 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:43:05.787753 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:43:05.796483 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:43:05.940215 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Sep 12 17:43:05.942771 (udev-worker)[1832]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:43:06.019994 systemd-resolved[1765]: Positive Trust Anchors: Sep 12 17:43:06.020013 systemd-resolved[1765]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:43:06.020082 systemd-resolved[1765]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:43:06.041654 systemd-resolved[1765]: Defaulting to hostname 'linux'. Sep 12 17:43:06.047372 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:43:06.048373 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:43:06.050448 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:43:06.051211 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:43:06.051892 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:43:06.053201 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 17:43:06.054631 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:43:06.056519 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:43:06.057086 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:43:06.058408 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:43:06.058447 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:43:06.058975 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:43:06.061090 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:43:06.065135 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:43:06.071187 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:43:06.073102 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:43:06.073713 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:43:06.078508 systemd-networkd[1817]: lo: Link UP Sep 12 17:43:06.078520 systemd-networkd[1817]: lo: Gained carrier Sep 12 17:43:06.081590 systemd-networkd[1817]: Enumeration completed Sep 12 17:43:06.083278 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:43:06.085326 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:43:06.086500 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:43:06.086514 systemd-networkd[1817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 17:43:06.088331 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:43:06.090634 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:43:06.095170 systemd[1]: Reached target network.target - Network. Sep 12 17:43:06.097809 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:43:06.099434 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:43:06.100079 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:43:06.100122 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:43:06.101615 systemd-networkd[1817]: eth0: Link UP Sep 12 17:43:06.101818 systemd-networkd[1817]: eth0: Gained carrier Sep 12 17:43:06.101860 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:43:06.104053 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:43:06.108551 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:43:06.115691 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:43:06.117420 systemd-networkd[1817]: eth0: DHCPv4 address 172.31.16.83/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 17:43:06.121391 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:43:06.127548 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:43:06.134437 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:43:06.135042 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:43:06.139611 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 17:43:06.148382 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:43:06.155638 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 17:43:06.159644 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:43:06.175929 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 17:43:06.207584 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:43:06.226996 extend-filesystems[1862]: Found /dev/nvme0n1p6 Sep 12 17:43:06.230123 jq[1861]: false Sep 12 17:43:06.233054 extend-filesystems[1862]: Found /dev/nvme0n1p9 Sep 12 17:43:06.241089 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:43:06.252606 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:43:06.258378 extend-filesystems[1862]: Checking size of /dev/nvme0n1p9 Sep 12 17:43:06.262455 google_oslogin_nss_cache[1863]: oslogin_cache_refresh[1863]: Refreshing passwd entry cache Sep 12 17:43:06.264864 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:43:06.276576 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:43:06.280383 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 12 17:43:06.281142 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:43:06.281745 oslogin_cache_refresh[1863]: Refreshing passwd entry cache Sep 12 17:43:06.286621 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:43:06.292527 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:43:06.304005 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:43:06.304336 extend-filesystems[1862]: Resized partition /dev/nvme0n1p9 Sep 12 17:43:06.306793 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:43:06.307093 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:43:06.311487 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 12 17:43:06.313580 google_oslogin_nss_cache[1863]: oslogin_cache_refresh[1863]: Failure getting users, quitting Sep 12 17:43:06.313580 google_oslogin_nss_cache[1863]: oslogin_cache_refresh[1863]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 17:43:06.313580 google_oslogin_nss_cache[1863]: oslogin_cache_refresh[1863]: Refreshing group entry cache Sep 12 17:43:06.313003 oslogin_cache_refresh[1863]: Failure getting users, quitting Sep 12 17:43:06.313027 oslogin_cache_refresh[1863]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 17:43:06.313090 oslogin_cache_refresh[1863]: Refreshing group entry cache Sep 12 17:43:06.316360 google_oslogin_nss_cache[1863]: oslogin_cache_refresh[1863]: Failure getting groups, quitting Sep 12 17:43:06.319339 oslogin_cache_refresh[1863]: Failure getting groups, quitting Sep 12 17:43:06.320667 google_oslogin_nss_cache[1863]: oslogin_cache_refresh[1863]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 17:43:06.319374 oslogin_cache_refresh[1863]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 17:43:06.328417 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 17:43:06.329412 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 17:43:06.339592 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:43:06.339928 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 12 17:43:06.343343 extend-filesystems[1919]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 17:43:06.360268 coreos-metadata[1856]: Sep 12 17:43:06.360 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:43:06.361791 coreos-metadata[1856]: Sep 12 17:43:06.361 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 17:43:06.363356 coreos-metadata[1856]: Sep 12 17:43:06.363 INFO Fetch successful Sep 12 17:43:06.363509 coreos-metadata[1856]: Sep 12 17:43:06.363 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 17:43:06.365548 coreos-metadata[1856]: Sep 12 17:43:06.365 INFO Fetch successful Sep 12 17:43:06.365548 coreos-metadata[1856]: Sep 12 17:43:06.365 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 17:43:06.366476 coreos-metadata[1856]: Sep 12 17:43:06.366 INFO Fetch successful Sep 12 17:43:06.366476 coreos-metadata[1856]: Sep 12 17:43:06.366 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 17:43:06.367264 coreos-metadata[1856]: Sep 12 17:43:06.367 INFO Fetch successful Sep 12 17:43:06.367264 coreos-metadata[1856]: Sep 12 17:43:06.367 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 17:43:06.367937 coreos-metadata[1856]: Sep 12 17:43:06.367 INFO Fetch failed with 404: resource not found Sep 12 17:43:06.368120 coreos-metadata[1856]: Sep 12 17:43:06.368 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 17:43:06.370619 coreos-metadata[1856]: Sep 12 17:43:06.369 INFO Fetch successful Sep 12 17:43:06.370619 coreos-metadata[1856]: Sep 12 17:43:06.369 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 17:43:06.371450 coreos-metadata[1856]: Sep 12 17:43:06.371 INFO Fetch successful Sep 12 17:43:06.371450 coreos-metadata[1856]: Sep 12 17:43:06.371 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 17:43:06.372103 coreos-metadata[1856]: Sep 12 17:43:06.372 INFO Fetch successful Sep 12 17:43:06.372380 coreos-metadata[1856]: Sep 12 17:43:06.372 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 17:43:06.373320 coreos-metadata[1856]: Sep 12 17:43:06.373 INFO Fetch successful Sep 12 17:43:06.373320 coreos-metadata[1856]: Sep 12 17:43:06.373 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 17:43:06.378343 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 17:43:06.378443 coreos-metadata[1856]: Sep 12 17:43:06.375 INFO Fetch successful Sep 12 17:43:06.405592 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:43:06.405664 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Sep 12 17:43:06.417548 update_engine[1910]: I20250912 17:43:06.417450 1910 main.cc:92] Flatcar Update Engine starting Sep 12 17:43:06.429981 jq[1913]: true Sep 12 17:43:06.431029 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:43:06.431559 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:43:06.443644 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:43:06.464118 dbus-daemon[1857]: [system] SELinux support is enabled Sep 12 17:43:06.464332 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
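The coreos-metadata entries above follow the EC2 IMDSv2 flow: PUT a session token to /latest/api/token, then GET the individual meta-data paths with that token (a path that does not exist, such as ipv6 here, comes back as 404). Below is a minimal standard-library sketch of the same flow, assuming it runs on the instance itself; the function names and the 60-second TTL are illustrative, while the paths and headers are the standard IMDSv2 ones used in the log.

import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=60):
    # IMDSv2: a session token must be obtained with a PUT before any metadata read.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    # Same dated API version the agent uses in the log above.
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    for path in ("instance-id", "instance-type", "local-ipv4", "placement/availability-zone"):
        # A missing path (e.g. "ipv6" on an IPv4-only instance) raises HTTPError 404, as seen above.
        print(path, "=", imds_get(path, token))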
Sep 12 17:43:06.467855 (ntainerd)[1948]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:43:06.492028 tar[1920]: linux-amd64/LICENSE Sep 12 17:43:06.492028 tar[1920]: linux-amd64/helm Sep 12 17:43:06.470243 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:43:06.492587 update_engine[1910]: I20250912 17:43:06.489992 1910 update_check_scheduler.cc:74] Next update check in 7m38s Sep 12 17:43:06.489647 dbus-daemon[1857]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1817 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 17:43:06.470280 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:43:06.472451 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:43:06.472475 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:43:06.488987 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:43:06.498483 kernel: ACPI: button: Sleep Button [SLPF] Sep 12 17:43:06.500474 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 17:43:06.517346 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 17:43:06.532159 extend-filesystems[1919]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 17:43:06.532159 extend-filesystems[1919]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:43:06.532159 extend-filesystems[1919]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 17:43:06.554680 extend-filesystems[1862]: Resized filesystem in /dev/nvme0n1p9 Sep 12 17:43:06.559416 jq[1953]: true Sep 12 17:43:06.567984 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:43:06.569624 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:43:06.570375 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:43:06.592385 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:43:06.607344 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:43:06.611586 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 17:43:06.614697 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:43:06.634338 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 12 17:43:06.747425 bash[2018]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:43:06.748757 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:43:06.754034 systemd[1]: Starting sshkeys.service... Sep 12 17:43:06.812894 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
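For scale, the resize2fs/extend-filesystems entries above grow the root filesystem on /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB. A quick arithmetic check of what that means in bytes:

# 4 KiB blocks reported by resize2fs: 553472 before, 1489915 after the on-line resize.
BLOCK = 4096
for label, blocks in (("before", 553_472), ("after", 1_489_915)):
    size = blocks * BLOCK
    print(f"{label}: {size:,} bytes ≈ {size / 1e9:.2f} GB ≈ {size / 2**30:.2f} GiB")
# before: 2,267,021,312 bytes ≈ 2.27 GB ≈ 2.11 GiB
# after:  6,102,691,840 bytes ≈ 6.10 GB ≈ 5.68 GiB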
Sep 12 17:43:06.821430 ntpd[1870]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 14:59:08 UTC 2025 (1): Starting Sep 12 17:43:06.821474 ntpd[1870]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:43:06.821860 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 14:59:08 UTC 2025 (1): Starting Sep 12 17:43:06.821860 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:43:06.821860 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: ---------------------------------------------------- Sep 12 17:43:06.821860 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:43:06.821860 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:43:06.821860 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: corporation. Support and training for ntp-4 are Sep 12 17:43:06.821860 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: available at https://www.nwtime.org/support Sep 12 17:43:06.821860 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: ---------------------------------------------------- Sep 12 17:43:06.821483 ntpd[1870]: ---------------------------------------------------- Sep 12 17:43:06.821492 ntpd[1870]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:43:06.821501 ntpd[1870]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:43:06.821509 ntpd[1870]: corporation. Support and training for ntp-4 are Sep 12 17:43:06.821518 ntpd[1870]: available at https://www.nwtime.org/support Sep 12 17:43:06.821529 ntpd[1870]: ---------------------------------------------------- Sep 12 17:43:06.828799 ntpd[1870]: proto: precision = 0.083 usec (-23) Sep 12 17:43:06.837484 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: proto: precision = 0.083 usec (-23) Sep 12 17:43:06.846835 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:43:06.848994 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: basedate set to 2025-08-31 Sep 12 17:43:06.848994 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: gps base set to 2025-08-31 (week 2382) Sep 12 17:43:06.847499 ntpd[1870]: basedate set to 2025-08-31 Sep 12 17:43:06.847525 ntpd[1870]: gps base set to 2025-08-31 (week 2382) Sep 12 17:43:06.849608 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Sep 12 17:43:06.859873 ntpd[1870]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:43:06.859945 ntpd[1870]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:43:06.860060 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:43:06.860060 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:43:06.860153 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:43:06.860137 ntpd[1870]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:43:06.860248 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: Listen normally on 3 eth0 172.31.16.83:123 Sep 12 17:43:06.860248 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: Listen normally on 4 lo [::1]:123 Sep 12 17:43:06.860183 ntpd[1870]: Listen normally on 3 eth0 172.31.16.83:123 Sep 12 17:43:06.860378 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: bind(21) AF_INET6 fe80::475:9bff:fe50:277%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:43:06.860378 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: unable to create socket on eth0 (5) for fe80::475:9bff:fe50:277%2#123 Sep 12 17:43:06.860378 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: failed to init interface for address fe80::475:9bff:fe50:277%2 Sep 12 17:43:06.860378 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: Listening on routing socket on fd #21 for interface updates Sep 12 17:43:06.860225 ntpd[1870]: Listen normally on 4 lo [::1]:123 Sep 12 17:43:06.860277 ntpd[1870]: bind(21) AF_INET6 fe80::475:9bff:fe50:277%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:43:06.860299 ntpd[1870]: unable to create socket on eth0 (5) for fe80::475:9bff:fe50:277%2#123 Sep 12 17:43:06.860330 ntpd[1870]: failed to init interface for address fe80::475:9bff:fe50:277%2 Sep 12 17:43:06.860366 ntpd[1870]: Listening on routing socket on fd #21 for interface updates Sep 12 17:43:06.888759 ntpd[1870]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:43:06.889470 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:43:06.889470 ntpd[1870]: 12 Sep 17:43:06 ntpd[1870]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:43:06.888802 ntpd[1870]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:43:06.957049 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 17:43:06.961847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:43:06.962181 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:43:06.970550 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:43:06.977820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:43:07.042066 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 12 17:43:07.048432 coreos-metadata[2026]: Sep 12 17:43:07.048 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:43:07.053447 coreos-metadata[2026]: Sep 12 17:43:07.053 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 17:43:07.062087 coreos-metadata[2026]: Sep 12 17:43:07.056 INFO Fetch successful Sep 12 17:43:07.062087 coreos-metadata[2026]: Sep 12 17:43:07.056 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 17:43:07.062087 coreos-metadata[2026]: Sep 12 17:43:07.057 INFO Fetch successful Sep 12 17:43:07.059244 unknown[2026]: wrote ssh authorized keys file for user: core Sep 12 17:43:07.126389 update-ssh-keys[2053]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:43:07.128752 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:43:07.136889 systemd[1]: Finished sshkeys.service. Sep 12 17:43:07.213242 locksmithd[1965]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:43:07.215216 containerd[1948]: time="2025-09-12T17:43:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:43:07.231609 containerd[1948]: time="2025-09-12T17:43:07.231552695Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 17:43:07.292648 systemd-networkd[1817]: eth0: Gained IPv6LL Sep 12 17:43:07.302830 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:43:07.303724 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:43:07.307672 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 17:43:07.315442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 12 17:43:07.321044 containerd[1948]: time="2025-09-12T17:43:07.320549044Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.984µs" Sep 12 17:43:07.321044 containerd[1948]: time="2025-09-12T17:43:07.320600062Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 17:43:07.321044 containerd[1948]: time="2025-09-12T17:43:07.320626080Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 17:43:07.321044 containerd[1948]: time="2025-09-12T17:43:07.320807942Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 17:43:07.321044 containerd[1948]: time="2025-09-12T17:43:07.320826500Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 17:43:07.321044 containerd[1948]: time="2025-09-12T17:43:07.320859583Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:43:07.321044 containerd[1948]: time="2025-09-12T17:43:07.320921358Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:43:07.321044 containerd[1948]: time="2025-09-12T17:43:07.320935335Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:43:07.321615 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:43:07.345560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:43:07.347505 containerd[1948]: time="2025-09-12T17:43:07.347458920Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:43:07.348340 containerd[1948]: time="2025-09-12T17:43:07.347643638Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:43:07.348340 containerd[1948]: time="2025-09-12T17:43:07.347689199Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:43:07.348340 containerd[1948]: time="2025-09-12T17:43:07.347771753Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 17:43:07.348340 containerd[1948]: time="2025-09-12T17:43:07.347955845Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 17:43:07.348340 containerd[1948]: time="2025-09-12T17:43:07.348214007Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:43:07.348340 containerd[1948]: time="2025-09-12T17:43:07.348257855Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:43:07.348340 containerd[1948]: time="2025-09-12T17:43:07.348274994Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 17:43:07.348683 containerd[1948]: 
time="2025-09-12T17:43:07.348660696Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 17:43:07.349255 containerd[1948]: time="2025-09-12T17:43:07.349232276Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 17:43:07.349525 containerd[1948]: time="2025-09-12T17:43:07.349505902Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:43:07.355007 containerd[1948]: time="2025-09-12T17:43:07.354961396Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:43:07.355192 containerd[1948]: time="2025-09-12T17:43:07.355174241Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:43:07.355400 containerd[1948]: time="2025-09-12T17:43:07.355301937Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:43:07.355508 containerd[1948]: time="2025-09-12T17:43:07.355491368Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:43:07.355577 containerd[1948]: time="2025-09-12T17:43:07.355562862Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:43:07.355644 containerd[1948]: time="2025-09-12T17:43:07.355629828Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:43:07.355720 containerd[1948]: time="2025-09-12T17:43:07.355705852Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:43:07.355794 containerd[1948]: time="2025-09-12T17:43:07.355780916Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:43:07.355892 containerd[1948]: time="2025-09-12T17:43:07.355876287Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356360118Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356391127Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356411621Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356569886Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356607167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356635984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356680269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356697669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 17:43:07.356938 containerd[1948]: 
time="2025-09-12T17:43:07.356712830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356729177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356745270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356763227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356778793Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356793740Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:43:07.356938 containerd[1948]: time="2025-09-12T17:43:07.356888593Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:43:07.357516 containerd[1948]: time="2025-09-12T17:43:07.356908986Z" level=info msg="Start snapshots syncer" Sep 12 17:43:07.358172 containerd[1948]: time="2025-09-12T17:43:07.357587946Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:43:07.358172 containerd[1948]: time="2025-09-12T17:43:07.358038778Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:43:07.358414 containerd[1948]: time="2025-09-12T17:43:07.358100145Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 17:43:07.358586 
containerd[1948]: time="2025-09-12T17:43:07.358563967Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:43:07.358864 containerd[1948]: time="2025-09-12T17:43:07.358841797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:43:07.359009 containerd[1948]: time="2025-09-12T17:43:07.358991588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359406599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359434837Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359453956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359475963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359493732Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359527327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359544080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359563218Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359628965Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359651614Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359711231Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359727429Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359739742Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:43:07.359931 containerd[1948]: time="2025-09-12T17:43:07.359753642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:43:07.360839 containerd[1948]: time="2025-09-12T17:43:07.359768773Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:43:07.360839 containerd[1948]: time="2025-09-12T17:43:07.359791341Z" level=info msg="runtime interface created" Sep 12 17:43:07.360839 containerd[1948]: time="2025-09-12T17:43:07.359799187Z" level=info msg="created NRI 
interface" Sep 12 17:43:07.360839 containerd[1948]: time="2025-09-12T17:43:07.359812831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:43:07.360839 containerd[1948]: time="2025-09-12T17:43:07.359844077Z" level=info msg="Connect containerd service" Sep 12 17:43:07.360839 containerd[1948]: time="2025-09-12T17:43:07.359886466Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:43:07.365193 containerd[1948]: time="2025-09-12T17:43:07.363787865Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:43:07.601980 amazon-ssm-agent[2072]: Initializing new seelog logger Sep 12 17:43:07.604481 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:43:07.609602 amazon-ssm-agent[2072]: New Seelog Logger Creation Complete Sep 12 17:43:07.616095 amazon-ssm-agent[2072]: 2025/09/12 17:43:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:07.616095 amazon-ssm-agent[2072]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:07.616095 amazon-ssm-agent[2072]: 2025/09/12 17:43:07 processing appconfig overrides Sep 12 17:43:07.616095 amazon-ssm-agent[2072]: 2025/09/12 17:43:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:07.616095 amazon-ssm-agent[2072]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:07.616095 amazon-ssm-agent[2072]: 2025/09/12 17:43:07 processing appconfig overrides Sep 12 17:43:07.624571 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.6112 INFO Proxy environment variables: Sep 12 17:43:07.630513 amazon-ssm-agent[2072]: 2025/09/12 17:43:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:07.631809 amazon-ssm-agent[2072]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:07.631809 amazon-ssm-agent[2072]: 2025/09/12 17:43:07 processing appconfig overrides Sep 12 17:43:07.653082 amazon-ssm-agent[2072]: 2025/09/12 17:43:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:07.653082 amazon-ssm-agent[2072]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:07.653250 amazon-ssm-agent[2072]: 2025/09/12 17:43:07 processing appconfig overrides Sep 12 17:43:07.731184 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.6112 INFO http_proxy: Sep 12 17:43:07.741145 systemd-logind[1901]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 17:43:07.741532 systemd-logind[1901]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 12 17:43:07.741564 systemd-logind[1901]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:43:07.741965 systemd-logind[1901]: New seat seat0. Sep 12 17:43:07.742919 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 12 17:43:07.752430 containerd[1948]: time="2025-09-12T17:43:07.752382764Z" level=info msg="Start subscribing containerd event" Sep 12 17:43:07.752534 containerd[1948]: time="2025-09-12T17:43:07.752456042Z" level=info msg="Start recovering state" Sep 12 17:43:07.752593 containerd[1948]: time="2025-09-12T17:43:07.752566554Z" level=info msg="Start event monitor" Sep 12 17:43:07.752593 containerd[1948]: time="2025-09-12T17:43:07.752581560Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:43:07.752664 containerd[1948]: time="2025-09-12T17:43:07.752593818Z" level=info msg="Start streaming server" Sep 12 17:43:07.752664 containerd[1948]: time="2025-09-12T17:43:07.752606466Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:43:07.752664 containerd[1948]: time="2025-09-12T17:43:07.752616473Z" level=info msg="runtime interface starting up..." Sep 12 17:43:07.752664 containerd[1948]: time="2025-09-12T17:43:07.752624765Z" level=info msg="starting plugins..." Sep 12 17:43:07.752664 containerd[1948]: time="2025-09-12T17:43:07.752640701Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:43:07.774873 containerd[1948]: time="2025-09-12T17:43:07.774301023Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:43:07.780872 containerd[1948]: time="2025-09-12T17:43:07.780718155Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:43:07.782070 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:43:07.784162 containerd[1948]: time="2025-09-12T17:43:07.782815963Z" level=info msg="containerd successfully booted in 0.568084s" Sep 12 17:43:07.833513 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.6112 INFO no_proxy: Sep 12 17:43:07.947086 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.6112 INFO https_proxy: Sep 12 17:43:07.945034 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:43:08.054329 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.6113 INFO Checking if agent identity type OnPrem can be assumed Sep 12 17:43:08.082566 sshd_keygen[1952]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:43:08.108702 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 17:43:08.111524 dbus-daemon[1857]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 17:43:08.112920 dbus-daemon[1857]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1964 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 17:43:08.124664 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 17:43:08.157055 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.6115 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:43:08.215098 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:43:08.220111 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:43:08.224588 systemd[1]: Started sshd@0-172.31.16.83:22-139.178.68.195:60796.service - OpenSSH per-connection server daemon (139.178.68.195:60796). Sep 12 17:43:08.257132 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.9676 INFO Agent will take identity from EC2 Sep 12 17:43:08.289347 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:43:08.289663 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Sep 12 17:43:08.294802 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:43:08.334416 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:43:08.342138 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:43:08.349777 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:43:08.352338 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:43:08.365347 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.9865 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 12 17:43:08.392208 polkitd[2192]: Started polkitd version 126 Sep 12 17:43:08.412210 polkitd[2192]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 17:43:08.412908 polkitd[2192]: Loading rules from directory /run/polkit-1/rules.d Sep 12 17:43:08.412968 polkitd[2192]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 12 17:43:08.413513 polkitd[2192]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 12 17:43:08.415424 polkitd[2192]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 12 17:43:08.415485 polkitd[2192]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 17:43:08.417362 polkitd[2192]: Finished loading, compiling and executing 2 rules Sep 12 17:43:08.417711 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 17:43:08.421734 dbus-daemon[1857]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 17:43:08.422603 polkitd[2192]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 17:43:08.441326 tar[1920]: linux-amd64/README.md Sep 12 17:43:08.452591 systemd-resolved[1765]: System hostname changed to 'ip-172-31-16-83'. Sep 12 17:43:08.452934 systemd-hostnamed[1964]: Hostname set to (transient) Sep 12 17:43:08.462832 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:43:08.465602 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.9865 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 12 17:43:08.538441 sshd[2199]: Accepted publickey for core from 139.178.68.195 port 60796 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:08.542139 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:08.552565 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:43:08.554537 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:43:08.564809 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.9865 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:43:08.568136 systemd-logind[1901]: New session 1 of user core. Sep 12 17:43:08.571296 amazon-ssm-agent[2072]: 2025/09/12 17:43:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:08.571296 amazon-ssm-agent[2072]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:08.571296 amazon-ssm-agent[2072]: 2025/09/12 17:43:08 processing appconfig overrides Sep 12 17:43:08.585659 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:43:08.590748 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 12 17:43:08.605852 (systemd)[2222]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:43:08.608616 systemd-logind[1901]: New session c1 of user core. Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.9865 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.9865 INFO [Registrar] Starting registrar module Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.9916 INFO [EC2Identity] Checking disk for registration info Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.9917 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:07.9917 INFO [EC2Identity] Generating registration keypair Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:08.5259 INFO [EC2Identity] Checking write access before registering Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:08.5262 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:08.5687 INFO [EC2Identity] EC2 registration was successful. Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:08.5687 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:08.5688 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:08.5688 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:08.6136 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:43:08.613985 amazon-ssm-agent[2072]: 2025-09-12 17:43:08.6138 INFO [CredentialRefresher] Credentials ready Sep 12 17:43:08.664230 amazon-ssm-agent[2072]: 2025-09-12 17:43:08.6139 INFO [CredentialRefresher] Next credential rotation will be in 29.99999542455 minutes Sep 12 17:43:08.779350 systemd[2222]: Queued start job for default target default.target. Sep 12 17:43:08.796063 systemd[2222]: Created slice app.slice - User Application Slice. Sep 12 17:43:08.796116 systemd[2222]: Reached target paths.target - Paths. Sep 12 17:43:08.796472 systemd[2222]: Reached target timers.target - Timers. Sep 12 17:43:08.798247 systemd[2222]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:43:08.825627 systemd[2222]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:43:08.825948 systemd[2222]: Reached target sockets.target - Sockets. Sep 12 17:43:08.826099 systemd[2222]: Reached target basic.target - Basic System. Sep 12 17:43:08.826157 systemd[2222]: Reached target default.target - Main User Target. Sep 12 17:43:08.826198 systemd[2222]: Startup finished in 210ms. Sep 12 17:43:08.826293 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:43:08.835555 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:43:08.984221 systemd[1]: Started sshd@1-172.31.16.83:22-139.178.68.195:51426.service - OpenSSH per-connection server daemon (139.178.68.195:51426). 
Sep 12 17:43:09.153845 sshd[2233]: Accepted publickey for core from 139.178.68.195 port 51426 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:09.155342 sshd-session[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:09.166284 systemd-logind[1901]: New session 2 of user core. Sep 12 17:43:09.183604 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:43:09.309340 sshd[2236]: Connection closed by 139.178.68.195 port 51426 Sep 12 17:43:09.308218 sshd-session[2233]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:09.322871 systemd[1]: sshd@1-172.31.16.83:22-139.178.68.195:51426.service: Deactivated successfully. Sep 12 17:43:09.328017 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:43:09.330115 systemd-logind[1901]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:43:09.344401 systemd[1]: Started sshd@2-172.31.16.83:22-139.178.68.195:51428.service - OpenSSH per-connection server daemon (139.178.68.195:51428). Sep 12 17:43:09.348236 systemd-logind[1901]: Removed session 2. Sep 12 17:43:09.535659 sshd[2242]: Accepted publickey for core from 139.178.68.195 port 51428 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:09.535905 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:09.541888 systemd-logind[1901]: New session 3 of user core. Sep 12 17:43:09.547551 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:43:09.628042 amazon-ssm-agent[2072]: 2025-09-12 17:43:09.6277 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:43:09.671510 sshd[2245]: Connection closed by 139.178.68.195 port 51428 Sep 12 17:43:09.674691 sshd-session[2242]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:09.680753 systemd[1]: sshd@2-172.31.16.83:22-139.178.68.195:51428.service: Deactivated successfully. Sep 12 17:43:09.681634 systemd-logind[1901]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:43:09.686172 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:43:09.695220 systemd-logind[1901]: Removed session 3. Sep 12 17:43:09.728903 amazon-ssm-agent[2072]: 2025-09-12 17:43:09.6417 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2248) started Sep 12 17:43:09.821970 ntpd[1870]: Listen normally on 6 eth0 [fe80::475:9bff:fe50:277%2]:123 Sep 12 17:43:09.825557 ntpd[1870]: 12 Sep 17:43:09 ntpd[1870]: Listen normally on 6 eth0 [fe80::475:9bff:fe50:277%2]:123 Sep 12 17:43:09.833351 amazon-ssm-agent[2072]: 2025-09-12 17:43:09.6417 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:43:10.411745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:43:10.413748 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:43:10.415029 systemd[1]: Startup finished in 2.774s (kernel) + 8.659s (initrd) + 8.134s (userspace) = 19.568s. 
Sep 12 17:43:10.426952 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:43:11.525684 kubelet[2269]: E0912 17:43:11.525591 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:43:11.528201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:43:11.528366 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:43:11.528691 systemd[1]: kubelet.service: Consumed 1.088s CPU time, 265.9M memory peak. Sep 12 17:43:14.995382 systemd-resolved[1765]: Clock change detected. Flushing caches. Sep 12 17:43:20.877833 systemd[1]: Started sshd@3-172.31.16.83:22-139.178.68.195:58654.service - OpenSSH per-connection server daemon (139.178.68.195:58654). Sep 12 17:43:21.049224 sshd[2282]: Accepted publickey for core from 139.178.68.195 port 58654 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:21.050764 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:21.057479 systemd-logind[1901]: New session 4 of user core. Sep 12 17:43:21.070656 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:43:21.188314 sshd[2285]: Connection closed by 139.178.68.195 port 58654 Sep 12 17:43:21.189126 sshd-session[2282]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:21.193195 systemd[1]: sshd@3-172.31.16.83:22-139.178.68.195:58654.service: Deactivated successfully. Sep 12 17:43:21.195063 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:43:21.195933 systemd-logind[1901]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:43:21.197260 systemd-logind[1901]: Removed session 4. Sep 12 17:43:21.219297 systemd[1]: Started sshd@4-172.31.16.83:22-139.178.68.195:58656.service - OpenSSH per-connection server daemon (139.178.68.195:58656). Sep 12 17:43:21.391835 sshd[2291]: Accepted publickey for core from 139.178.68.195 port 58656 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:21.393387 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:21.399485 systemd-logind[1901]: New session 5 of user core. Sep 12 17:43:21.408653 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:43:21.523570 sshd[2294]: Connection closed by 139.178.68.195 port 58656 Sep 12 17:43:21.524472 sshd-session[2291]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:21.528612 systemd[1]: sshd@4-172.31.16.83:22-139.178.68.195:58656.service: Deactivated successfully. Sep 12 17:43:21.530597 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:43:21.531663 systemd-logind[1901]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:43:21.533679 systemd-logind[1901]: Removed session 5. Sep 12 17:43:21.555146 systemd[1]: Started sshd@5-172.31.16.83:22-139.178.68.195:58664.service - OpenSSH per-connection server daemon (139.178.68.195:58664). 
Sep 12 17:43:21.730187 sshd[2300]: Accepted publickey for core from 139.178.68.195 port 58664 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:21.731504 sshd-session[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:21.736203 systemd-logind[1901]: New session 6 of user core. Sep 12 17:43:21.742650 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:43:21.862266 sshd[2303]: Connection closed by 139.178.68.195 port 58664 Sep 12 17:43:21.862612 sshd-session[2300]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:21.866303 systemd[1]: sshd@5-172.31.16.83:22-139.178.68.195:58664.service: Deactivated successfully. Sep 12 17:43:21.868002 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:43:21.870110 systemd-logind[1901]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:43:21.871174 systemd-logind[1901]: Removed session 6. Sep 12 17:43:21.898013 systemd[1]: Started sshd@6-172.31.16.83:22-139.178.68.195:58666.service - OpenSSH per-connection server daemon (139.178.68.195:58666). Sep 12 17:43:22.070098 sshd[2309]: Accepted publickey for core from 139.178.68.195 port 58666 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:22.071435 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:22.077581 systemd-logind[1901]: New session 7 of user core. Sep 12 17:43:22.086689 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:43:22.197880 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:43:22.198263 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:43:22.210950 sudo[2313]: pam_unix(sudo:session): session closed for user root Sep 12 17:43:22.233683 sshd[2312]: Connection closed by 139.178.68.195 port 58666 Sep 12 17:43:22.234394 sshd-session[2309]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:22.238185 systemd[1]: sshd@6-172.31.16.83:22-139.178.68.195:58666.service: Deactivated successfully. Sep 12 17:43:22.239865 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:43:22.242058 systemd-logind[1901]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:43:22.243362 systemd-logind[1901]: Removed session 7. Sep 12 17:43:22.268384 systemd[1]: Started sshd@7-172.31.16.83:22-139.178.68.195:58670.service - OpenSSH per-connection server daemon (139.178.68.195:58670). Sep 12 17:43:22.439452 sshd[2319]: Accepted publickey for core from 139.178.68.195 port 58670 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:22.441028 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:22.447382 systemd-logind[1901]: New session 8 of user core. Sep 12 17:43:22.456657 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 17:43:22.553955 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:43:22.554333 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:43:22.559641 sudo[2324]: pam_unix(sudo:session): session closed for user root Sep 12 17:43:22.565222 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:43:22.565638 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:43:22.576454 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:43:22.619128 augenrules[2346]: No rules Sep 12 17:43:22.620604 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:43:22.621005 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:43:22.622100 sudo[2323]: pam_unix(sudo:session): session closed for user root Sep 12 17:43:22.645855 sshd[2322]: Connection closed by 139.178.68.195 port 58670 Sep 12 17:43:22.646392 sshd-session[2319]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:22.649781 systemd[1]: sshd@7-172.31.16.83:22-139.178.68.195:58670.service: Deactivated successfully. Sep 12 17:43:22.651468 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:43:22.653566 systemd-logind[1901]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:43:22.654691 systemd-logind[1901]: Removed session 8. Sep 12 17:43:22.691239 systemd[1]: Started sshd@8-172.31.16.83:22-139.178.68.195:58678.service - OpenSSH per-connection server daemon (139.178.68.195:58678). Sep 12 17:43:22.703186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:43:22.707614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:43:22.857788 sshd[2355]: Accepted publickey for core from 139.178.68.195 port 58678 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:43:22.859468 sshd-session[2355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:22.865095 systemd-logind[1901]: New session 9 of user core. Sep 12 17:43:22.871845 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:43:22.930242 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:43:22.941076 (kubelet)[2367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:43:22.969219 sudo[2373]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:43:22.970054 sudo[2373]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:43:23.005680 kubelet[2367]: E0912 17:43:23.005623 2367 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:43:23.010666 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:43:23.010845 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:43:23.011535 systemd[1]: kubelet.service: Consumed 186ms CPU time, 108.6M memory peak. 
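The two kubelet failures above are the usual state of a node that has not yet joined a cluster: the kubelet is pointed at /var/lib/kubelet/config.yaml, and that file is typically generated later by kubeadm init/join (or other provisioning tooling) rather than shipped in the image, so the unit keeps exiting and being restarted until then. A hedged illustration of a minimal KubeletConfiguration stub follows, expressed through Python only so the examples stay in one language; the cgroupDriver choice simply mirrors the SystemdCgroup=true visible in the containerd config dump earlier, and on a kubeadm-managed node this file should not be written by hand.

from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path named in the error above

# Illustrative stub only; kubeadm generates the real file during init/join.
MINIMAL_STUB = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

if not KUBELET_CONFIG.exists():
    print(f"{KUBELET_CONFIG} missing; kubelet will keep restarting until it is provisioned")
    # KUBELET_CONFIG.write_text(MINIMAL_STUB)  # only for experiments outside kubeadm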
Sep 12 17:43:23.369403 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:43:23.384018 (dockerd)[2392]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:43:23.678516 dockerd[2392]: time="2025-09-12T17:43:23.677927629Z" level=info msg="Starting up" Sep 12 17:43:23.680053 dockerd[2392]: time="2025-09-12T17:43:23.680013177Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 17:43:23.692737 dockerd[2392]: time="2025-09-12T17:43:23.692684046Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 17:43:23.709881 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3543886407-merged.mount: Deactivated successfully. Sep 12 17:43:23.744311 dockerd[2392]: time="2025-09-12T17:43:23.744266890Z" level=info msg="Loading containers: start." Sep 12 17:43:23.755448 kernel: Initializing XFRM netlink socket Sep 12 17:43:23.974710 (udev-worker)[2413]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:43:24.030021 systemd-networkd[1817]: docker0: Link UP Sep 12 17:43:24.036314 dockerd[2392]: time="2025-09-12T17:43:24.036244451Z" level=info msg="Loading containers: done." Sep 12 17:43:24.053136 dockerd[2392]: time="2025-09-12T17:43:24.053083036Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:43:24.053306 dockerd[2392]: time="2025-09-12T17:43:24.053181598Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 17:43:24.053306 dockerd[2392]: time="2025-09-12T17:43:24.053269561Z" level=info msg="Initializing buildkit" Sep 12 17:43:24.078770 dockerd[2392]: time="2025-09-12T17:43:24.078717931Z" level=info msg="Completed buildkit initialization" Sep 12 17:43:24.086392 dockerd[2392]: time="2025-09-12T17:43:24.086336833Z" level=info msg="Daemon has completed initialization" Sep 12 17:43:24.086574 dockerd[2392]: time="2025-09-12T17:43:24.086534370Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:43:24.086707 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:43:25.212542 containerd[1948]: time="2025-09-12T17:43:25.212504411Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 17:43:25.761312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3196781407.mount: Deactivated successfully. 
Sep 12 17:43:27.128692 containerd[1948]: time="2025-09-12T17:43:27.128640705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:27.132445 containerd[1948]: time="2025-09-12T17:43:27.131147177Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 12 17:43:27.132445 containerd[1948]: time="2025-09-12T17:43:27.131971147Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:27.138671 containerd[1948]: time="2025-09-12T17:43:27.138619606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:27.139754 containerd[1948]: time="2025-09-12T17:43:27.139715381Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.927171723s" Sep 12 17:43:27.139906 containerd[1948]: time="2025-09-12T17:43:27.139886229Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 12 17:43:27.140904 containerd[1948]: time="2025-09-12T17:43:27.140865395Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 17:43:28.965641 containerd[1948]: time="2025-09-12T17:43:28.965588012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:28.966900 containerd[1948]: time="2025-09-12T17:43:28.966711479Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 12 17:43:28.968029 containerd[1948]: time="2025-09-12T17:43:28.967994333Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:28.971077 containerd[1948]: time="2025-09-12T17:43:28.971037426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:28.972123 containerd[1948]: time="2025-09-12T17:43:28.971985177Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.831081324s" Sep 12 17:43:28.972123 containerd[1948]: time="2025-09-12T17:43:28.972022864Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 12 17:43:28.973170 
containerd[1948]: time="2025-09-12T17:43:28.973140881Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 17:43:30.275590 containerd[1948]: time="2025-09-12T17:43:30.275530440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:30.277361 containerd[1948]: time="2025-09-12T17:43:30.277236168Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 12 17:43:30.282102 containerd[1948]: time="2025-09-12T17:43:30.278960183Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:30.293446 containerd[1948]: time="2025-09-12T17:43:30.291925959Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.318749559s" Sep 12 17:43:30.293446 containerd[1948]: time="2025-09-12T17:43:30.291978441Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 12 17:43:30.293446 containerd[1948]: time="2025-09-12T17:43:30.292404156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:30.293672 containerd[1948]: time="2025-09-12T17:43:30.293553706Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 17:43:31.333277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2560428150.mount: Deactivated successfully. 
Sep 12 17:43:31.916433 containerd[1948]: time="2025-09-12T17:43:31.916381892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:31.918719 containerd[1948]: time="2025-09-12T17:43:31.918569039Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 12 17:43:31.921110 containerd[1948]: time="2025-09-12T17:43:31.921070178Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:31.925193 containerd[1948]: time="2025-09-12T17:43:31.924295616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:31.925193 containerd[1948]: time="2025-09-12T17:43:31.925046059Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.631461248s" Sep 12 17:43:31.925193 containerd[1948]: time="2025-09-12T17:43:31.925082201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 12 17:43:31.925811 containerd[1948]: time="2025-09-12T17:43:31.925771179Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:43:32.527938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4048435858.mount: Deactivated successfully. Sep 12 17:43:33.261996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:43:33.265326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:43:33.559591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:43:33.568258 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:43:33.651272 kubelet[2735]: E0912 17:43:33.651193 2735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:43:33.655047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:43:33.655241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:43:33.655682 systemd[1]: kubelet.service: Consumed 210ms CPU time, 106.9M memory peak. 
Sep 12 17:43:33.725599 containerd[1948]: time="2025-09-12T17:43:33.725520751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:33.727649 containerd[1948]: time="2025-09-12T17:43:33.727603022Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 17:43:33.730032 containerd[1948]: time="2025-09-12T17:43:33.729964315Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:33.734446 containerd[1948]: time="2025-09-12T17:43:33.734253151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:33.735179 containerd[1948]: time="2025-09-12T17:43:33.735144754Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.809204495s" Sep 12 17:43:33.735179 containerd[1948]: time="2025-09-12T17:43:33.735181755Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:43:33.737751 containerd[1948]: time="2025-09-12T17:43:33.737723163Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:43:34.227825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1238742955.mount: Deactivated successfully. 
Sep 12 17:43:34.241401 containerd[1948]: time="2025-09-12T17:43:34.241336276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:43:34.243216 containerd[1948]: time="2025-09-12T17:43:34.243171813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:43:34.245668 containerd[1948]: time="2025-09-12T17:43:34.245598615Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:43:34.249072 containerd[1948]: time="2025-09-12T17:43:34.249015107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:43:34.249602 containerd[1948]: time="2025-09-12T17:43:34.249459936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 511.69322ms" Sep 12 17:43:34.249602 containerd[1948]: time="2025-09-12T17:43:34.249493148Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:43:34.250351 containerd[1948]: time="2025-09-12T17:43:34.250300107Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 17:43:34.842155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660690062.mount: Deactivated successfully. 
Sep 12 17:43:37.043918 containerd[1948]: time="2025-09-12T17:43:37.043856436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:37.044956 containerd[1948]: time="2025-09-12T17:43:37.044921173Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 12 17:43:37.046459 containerd[1948]: time="2025-09-12T17:43:37.046392221Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:37.050361 containerd[1948]: time="2025-09-12T17:43:37.050303297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:37.051965 containerd[1948]: time="2025-09-12T17:43:37.051424643Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.801080042s" Sep 12 17:43:37.051965 containerd[1948]: time="2025-09-12T17:43:37.051462230Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 12 17:43:39.635924 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 12 17:43:39.919518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:43:39.919779 systemd[1]: kubelet.service: Consumed 210ms CPU time, 106.9M memory peak. Sep 12 17:43:39.922524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:43:39.957385 systemd[1]: Reload requested from client PID 2830 ('systemctl') (unit session-9.scope)... Sep 12 17:43:39.957405 systemd[1]: Reloading... Sep 12 17:43:40.119469 zram_generator::config[2877]: No configuration found. Sep 12 17:43:40.414018 systemd[1]: Reloading finished in 456 ms. Sep 12 17:43:40.475975 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:43:40.476083 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:43:40.476457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:43:40.476518 systemd[1]: kubelet.service: Consumed 146ms CPU time, 98.2M memory peak. Sep 12 17:43:40.479177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:43:40.745030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:43:40.758959 (kubelet)[2938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:43:40.863442 kubelet[2938]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:43:40.863442 kubelet[2938]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 12 17:43:40.863442 kubelet[2938]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:43:40.863442 kubelet[2938]: I0912 17:43:40.863268 2938 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:43:41.398202 kubelet[2938]: I0912 17:43:41.398138 2938 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:43:41.398202 kubelet[2938]: I0912 17:43:41.398187 2938 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:43:41.398538 kubelet[2938]: I0912 17:43:41.398499 2938 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:43:41.454764 kubelet[2938]: I0912 17:43:41.454643 2938 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:43:41.455604 kubelet[2938]: E0912 17:43:41.455517 2938 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.83:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:41.470372 kubelet[2938]: I0912 17:43:41.470339 2938 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:43:41.476890 kubelet[2938]: I0912 17:43:41.476842 2938 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:43:41.480958 kubelet[2938]: I0912 17:43:41.480888 2938 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:43:41.481185 kubelet[2938]: I0912 17:43:41.480955 2938 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-83","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:43:41.483285 kubelet[2938]: I0912 17:43:41.483249 2938 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:43:41.483285 kubelet[2938]: I0912 17:43:41.483284 2938 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:43:41.484953 kubelet[2938]: I0912 17:43:41.484911 2938 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:43:41.492874 kubelet[2938]: I0912 17:43:41.492811 2938 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:43:41.492874 kubelet[2938]: I0912 17:43:41.492874 2938 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:43:41.494257 kubelet[2938]: I0912 17:43:41.493792 2938 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:43:41.494257 kubelet[2938]: I0912 17:43:41.493820 2938 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:43:41.497485 kubelet[2938]: W0912 17:43:41.496222 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-83&limit=500&resourceVersion=0": dial tcp 172.31.16.83:6443: connect: connection refused Sep 12 17:43:41.497485 kubelet[2938]: E0912 17:43:41.496301 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-83&limit=500&resourceVersion=0\": dial tcp 172.31.16.83:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:41.498039 kubelet[2938]: W0912 
17:43:41.497992 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.83:6443: connect: connection refused Sep 12 17:43:41.498476 kubelet[2938]: E0912 17:43:41.498443 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.83:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:41.500444 kubelet[2938]: I0912 17:43:41.500179 2938 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:43:41.504518 kubelet[2938]: I0912 17:43:41.504475 2938 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:43:41.506390 kubelet[2938]: W0912 17:43:41.506347 2938 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:43:41.510323 kubelet[2938]: I0912 17:43:41.510298 2938 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:43:41.510741 kubelet[2938]: I0912 17:43:41.510510 2938 server.go:1287] "Started kubelet" Sep 12 17:43:41.511949 kubelet[2938]: I0912 17:43:41.511914 2938 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:43:41.521053 kubelet[2938]: I0912 17:43:41.520469 2938 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:43:41.521053 kubelet[2938]: I0912 17:43:41.520981 2938 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:43:41.522204 kubelet[2938]: I0912 17:43:41.521983 2938 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:43:41.528373 kubelet[2938]: I0912 17:43:41.528343 2938 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:43:41.533549 kubelet[2938]: E0912 17:43:41.528466 2938 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.83:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.83:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-83.186499f2872f13ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-83,UID:ip-172-31-16-83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-83,},FirstTimestamp:2025-09-12 17:43:41.510480895 +0000 UTC m=+0.747461265,LastTimestamp:2025-09-12 17:43:41.510480895 +0000 UTC m=+0.747461265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-83,}" Sep 12 17:43:41.534519 kubelet[2938]: I0912 17:43:41.534224 2938 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:43:41.538957 kubelet[2938]: E0912 17:43:41.537148 2938 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-83\" not found" Sep 12 17:43:41.538957 kubelet[2938]: I0912 17:43:41.537185 2938 
volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:43:41.538957 kubelet[2938]: I0912 17:43:41.537394 2938 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:43:41.538957 kubelet[2938]: I0912 17:43:41.537461 2938 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:43:41.538957 kubelet[2938]: W0912 17:43:41.537799 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.83:6443: connect: connection refused Sep 12 17:43:41.538957 kubelet[2938]: E0912 17:43:41.537840 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.83:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:41.542618 kubelet[2938]: E0912 17:43:41.542491 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-83?timeout=10s\": dial tcp 172.31.16.83:6443: connect: connection refused" interval="200ms" Sep 12 17:43:41.543130 kubelet[2938]: I0912 17:43:41.543106 2938 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:43:41.543993 kubelet[2938]: I0912 17:43:41.543969 2938 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:43:41.544561 kubelet[2938]: E0912 17:43:41.544223 2938 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:43:41.550168 kubelet[2938]: I0912 17:43:41.550126 2938 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:43:41.572087 kubelet[2938]: I0912 17:43:41.572062 2938 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:43:41.572298 kubelet[2938]: I0912 17:43:41.572282 2938 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:43:41.572378 kubelet[2938]: I0912 17:43:41.572370 2938 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:43:41.575566 kubelet[2938]: I0912 17:43:41.575513 2938 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:43:41.578098 kubelet[2938]: I0912 17:43:41.576956 2938 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:43:41.578098 kubelet[2938]: I0912 17:43:41.576979 2938 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:43:41.578098 kubelet[2938]: I0912 17:43:41.577000 2938 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 17:43:41.578098 kubelet[2938]: I0912 17:43:41.577007 2938 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:43:41.578098 kubelet[2938]: E0912 17:43:41.577055 2938 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:43:41.579016 kubelet[2938]: I0912 17:43:41.578977 2938 policy_none.go:49] "None policy: Start" Sep 12 17:43:41.579016 kubelet[2938]: I0912 17:43:41.579005 2938 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:43:41.579016 kubelet[2938]: I0912 17:43:41.579019 2938 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:43:41.586803 kubelet[2938]: W0912 17:43:41.586688 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.83:6443: connect: connection refused Sep 12 17:43:41.586803 kubelet[2938]: E0912 17:43:41.586733 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.83:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:41.591123 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:43:41.607990 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:43:41.626701 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:43:41.628162 kubelet[2938]: I0912 17:43:41.628136 2938 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:43:41.628431 kubelet[2938]: I0912 17:43:41.628368 2938 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:43:41.628514 kubelet[2938]: I0912 17:43:41.628389 2938 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:43:41.629322 kubelet[2938]: I0912 17:43:41.629177 2938 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:43:41.631281 kubelet[2938]: E0912 17:43:41.631244 2938 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:43:41.631354 kubelet[2938]: E0912 17:43:41.631299 2938 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-83\" not found" Sep 12 17:43:41.691049 systemd[1]: Created slice kubepods-burstable-pode962c6cbce051d43f2969411b30c6402.slice - libcontainer container kubepods-burstable-pode962c6cbce051d43f2969411b30c6402.slice. Sep 12 17:43:41.713384 kubelet[2938]: E0912 17:43:41.713286 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-83\" not found" node="ip-172-31-16-83" Sep 12 17:43:41.716640 systemd[1]: Created slice kubepods-burstable-pod7f6746e5af43db249813f90df75f785f.slice - libcontainer container kubepods-burstable-pod7f6746e5af43db249813f90df75f785f.slice. 
Sep 12 17:43:41.718735 kubelet[2938]: E0912 17:43:41.718702 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-83\" not found" node="ip-172-31-16-83" Sep 12 17:43:41.722100 systemd[1]: Created slice kubepods-burstable-pod699a7fb92bee64ba7fa1a8e3f97a5883.slice - libcontainer container kubepods-burstable-pod699a7fb92bee64ba7fa1a8e3f97a5883.slice. Sep 12 17:43:41.724230 kubelet[2938]: E0912 17:43:41.724192 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-83\" not found" node="ip-172-31-16-83" Sep 12 17:43:41.731774 kubelet[2938]: I0912 17:43:41.731693 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-83" Sep 12 17:43:41.732466 kubelet[2938]: E0912 17:43:41.732225 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.83:6443/api/v1/nodes\": dial tcp 172.31.16.83:6443: connect: connection refused" node="ip-172-31-16-83" Sep 12 17:43:41.738852 kubelet[2938]: I0912 17:43:41.738781 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e962c6cbce051d43f2969411b30c6402-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-83\" (UID: \"e962c6cbce051d43f2969411b30c6402\") " pod="kube-system/kube-apiserver-ip-172-31-16-83" Sep 12 17:43:41.738852 kubelet[2938]: I0912 17:43:41.738827 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f6746e5af43db249813f90df75f785f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-83\" (UID: \"7f6746e5af43db249813f90df75f785f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:41.738852 kubelet[2938]: I0912 17:43:41.738847 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f6746e5af43db249813f90df75f785f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-83\" (UID: \"7f6746e5af43db249813f90df75f785f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:41.739177 kubelet[2938]: I0912 17:43:41.739148 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f6746e5af43db249813f90df75f785f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-83\" (UID: \"7f6746e5af43db249813f90df75f785f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:41.739274 kubelet[2938]: I0912 17:43:41.739191 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/699a7fb92bee64ba7fa1a8e3f97a5883-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-83\" (UID: \"699a7fb92bee64ba7fa1a8e3f97a5883\") " pod="kube-system/kube-scheduler-ip-172-31-16-83" Sep 12 17:43:41.739274 kubelet[2938]: I0912 17:43:41.739206 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e962c6cbce051d43f2969411b30c6402-ca-certs\") pod \"kube-apiserver-ip-172-31-16-83\" (UID: \"e962c6cbce051d43f2969411b30c6402\") " pod="kube-system/kube-apiserver-ip-172-31-16-83" Sep 12 17:43:41.739274 kubelet[2938]: I0912 
17:43:41.739220 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e962c6cbce051d43f2969411b30c6402-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-83\" (UID: \"e962c6cbce051d43f2969411b30c6402\") " pod="kube-system/kube-apiserver-ip-172-31-16-83" Sep 12 17:43:41.739274 kubelet[2938]: I0912 17:43:41.739237 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f6746e5af43db249813f90df75f785f-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-83\" (UID: \"7f6746e5af43db249813f90df75f785f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:41.739274 kubelet[2938]: I0912 17:43:41.739251 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f6746e5af43db249813f90df75f785f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-83\" (UID: \"7f6746e5af43db249813f90df75f785f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:41.743354 kubelet[2938]: E0912 17:43:41.743311 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-83?timeout=10s\": dial tcp 172.31.16.83:6443: connect: connection refused" interval="400ms" Sep 12 17:43:41.937283 kubelet[2938]: I0912 17:43:41.937239 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-83" Sep 12 17:43:41.937853 kubelet[2938]: E0912 17:43:41.937776 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.83:6443/api/v1/nodes\": dial tcp 172.31.16.83:6443: connect: connection refused" node="ip-172-31-16-83" Sep 12 17:43:42.017781 containerd[1948]: time="2025-09-12T17:43:42.017645604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-83,Uid:e962c6cbce051d43f2969411b30c6402,Namespace:kube-system,Attempt:0,}" Sep 12 17:43:42.030233 containerd[1948]: time="2025-09-12T17:43:42.030169572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-83,Uid:7f6746e5af43db249813f90df75f785f,Namespace:kube-system,Attempt:0,}" Sep 12 17:43:42.031683 containerd[1948]: time="2025-09-12T17:43:42.031524223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-83,Uid:699a7fb92bee64ba7fa1a8e3f97a5883,Namespace:kube-system,Attempt:0,}" Sep 12 17:43:42.155431 kubelet[2938]: E0912 17:43:42.144922 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-83?timeout=10s\": dial tcp 172.31.16.83:6443: connect: connection refused" interval="800ms" Sep 12 17:43:42.236006 containerd[1948]: time="2025-09-12T17:43:42.235939319Z" level=info msg="connecting to shim 4fd44ad89d1c2d0e5b5d56a09c06b9894a74e1fa531c798340d54ed15d97ea8d" address="unix:///run/containerd/s/67d1065a768eb2f5b4a2b7cbf648734595157a5c9110e2461e87297ad5162f72" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:43:42.247384 containerd[1948]: time="2025-09-12T17:43:42.247329485Z" level=info msg="connecting to shim fdab44d708724546a0f05e41dac1241c16d0b12228537517cfc1af8abbfa8237" 
address="unix:///run/containerd/s/3de95bab381e5957a5574a303d1912b4a61fd96d57a86c3290b0939591208247" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:43:42.248831 containerd[1948]: time="2025-09-12T17:43:42.248557987Z" level=info msg="connecting to shim 5c2c1b47e3ee8a7204bb4091585e24448c3fa2fcaabc0034a1d02ae5666648dc" address="unix:///run/containerd/s/359fbb61b655d13380bf407bb9727f8e7ce085f0c1a26a8d9e022dfeeabf2f00" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:43:42.340552 kubelet[2938]: I0912 17:43:42.340407 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-83" Sep 12 17:43:42.341578 kubelet[2938]: E0912 17:43:42.341538 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.83:6443/api/v1/nodes\": dial tcp 172.31.16.83:6443: connect: connection refused" node="ip-172-31-16-83" Sep 12 17:43:42.360802 systemd[1]: Started cri-containerd-4fd44ad89d1c2d0e5b5d56a09c06b9894a74e1fa531c798340d54ed15d97ea8d.scope - libcontainer container 4fd44ad89d1c2d0e5b5d56a09c06b9894a74e1fa531c798340d54ed15d97ea8d. Sep 12 17:43:42.362769 kubelet[2938]: W0912 17:43:42.361092 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.83:6443: connect: connection refused Sep 12 17:43:42.362769 kubelet[2938]: E0912 17:43:42.361152 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.83:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:42.362027 systemd[1]: Started cri-containerd-5c2c1b47e3ee8a7204bb4091585e24448c3fa2fcaabc0034a1d02ae5666648dc.scope - libcontainer container 5c2c1b47e3ee8a7204bb4091585e24448c3fa2fcaabc0034a1d02ae5666648dc. Sep 12 17:43:42.363842 systemd[1]: Started cri-containerd-fdab44d708724546a0f05e41dac1241c16d0b12228537517cfc1af8abbfa8237.scope - libcontainer container fdab44d708724546a0f05e41dac1241c16d0b12228537517cfc1af8abbfa8237. 
Sep 12 17:43:42.466008 containerd[1948]: time="2025-09-12T17:43:42.464996811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-83,Uid:e962c6cbce051d43f2969411b30c6402,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c2c1b47e3ee8a7204bb4091585e24448c3fa2fcaabc0034a1d02ae5666648dc\"" Sep 12 17:43:42.466008 containerd[1948]: time="2025-09-12T17:43:42.465011638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-83,Uid:699a7fb92bee64ba7fa1a8e3f97a5883,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdab44d708724546a0f05e41dac1241c16d0b12228537517cfc1af8abbfa8237\"" Sep 12 17:43:42.470845 containerd[1948]: time="2025-09-12T17:43:42.470806561Z" level=info msg="CreateContainer within sandbox \"5c2c1b47e3ee8a7204bb4091585e24448c3fa2fcaabc0034a1d02ae5666648dc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:43:42.471118 containerd[1948]: time="2025-09-12T17:43:42.471090886Z" level=info msg="CreateContainer within sandbox \"fdab44d708724546a0f05e41dac1241c16d0b12228537517cfc1af8abbfa8237\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:43:42.495977 containerd[1948]: time="2025-09-12T17:43:42.495859392Z" level=info msg="Container e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:42.512498 containerd[1948]: time="2025-09-12T17:43:42.512396900Z" level=info msg="Container c61d7e733e13a94dab2e27a6d60493495613c0c0bf739af81a27746aa838d0e7: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:42.530441 containerd[1948]: time="2025-09-12T17:43:42.530368281Z" level=info msg="CreateContainer within sandbox \"fdab44d708724546a0f05e41dac1241c16d0b12228537517cfc1af8abbfa8237\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d\"" Sep 12 17:43:42.531159 containerd[1948]: time="2025-09-12T17:43:42.531133934Z" level=info msg="StartContainer for \"e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d\"" Sep 12 17:43:42.533143 containerd[1948]: time="2025-09-12T17:43:42.533111587Z" level=info msg="connecting to shim e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d" address="unix:///run/containerd/s/3de95bab381e5957a5574a303d1912b4a61fd96d57a86c3290b0939591208247" protocol=ttrpc version=3 Sep 12 17:43:42.534703 containerd[1948]: time="2025-09-12T17:43:42.534657093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-83,Uid:7f6746e5af43db249813f90df75f785f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fd44ad89d1c2d0e5b5d56a09c06b9894a74e1fa531c798340d54ed15d97ea8d\"" Sep 12 17:43:42.541666 containerd[1948]: time="2025-09-12T17:43:42.541555436Z" level=info msg="CreateContainer within sandbox \"4fd44ad89d1c2d0e5b5d56a09c06b9894a74e1fa531c798340d54ed15d97ea8d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:43:42.547664 containerd[1948]: time="2025-09-12T17:43:42.547614770Z" level=info msg="CreateContainer within sandbox \"5c2c1b47e3ee8a7204bb4091585e24448c3fa2fcaabc0034a1d02ae5666648dc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c61d7e733e13a94dab2e27a6d60493495613c0c0bf739af81a27746aa838d0e7\"" Sep 12 17:43:42.550052 containerd[1948]: time="2025-09-12T17:43:42.550012844Z" level=info msg="StartContainer for 
\"c61d7e733e13a94dab2e27a6d60493495613c0c0bf739af81a27746aa838d0e7\"" Sep 12 17:43:42.551767 containerd[1948]: time="2025-09-12T17:43:42.551716427Z" level=info msg="connecting to shim c61d7e733e13a94dab2e27a6d60493495613c0c0bf739af81a27746aa838d0e7" address="unix:///run/containerd/s/359fbb61b655d13380bf407bb9727f8e7ce085f0c1a26a8d9e022dfeeabf2f00" protocol=ttrpc version=3 Sep 12 17:43:42.567670 systemd[1]: Started cri-containerd-e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d.scope - libcontainer container e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d. Sep 12 17:43:42.583478 containerd[1948]: time="2025-09-12T17:43:42.583163438Z" level=info msg="Container 0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:42.607163 containerd[1948]: time="2025-09-12T17:43:42.606995668Z" level=info msg="CreateContainer within sandbox \"4fd44ad89d1c2d0e5b5d56a09c06b9894a74e1fa531c798340d54ed15d97ea8d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386\"" Sep 12 17:43:42.610950 containerd[1948]: time="2025-09-12T17:43:42.609581420Z" level=info msg="StartContainer for \"0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386\"" Sep 12 17:43:42.615376 containerd[1948]: time="2025-09-12T17:43:42.615238785Z" level=info msg="connecting to shim 0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386" address="unix:///run/containerd/s/67d1065a768eb2f5b4a2b7cbf648734595157a5c9110e2461e87297ad5162f72" protocol=ttrpc version=3 Sep 12 17:43:42.619910 systemd[1]: Started cri-containerd-c61d7e733e13a94dab2e27a6d60493495613c0c0bf739af81a27746aa838d0e7.scope - libcontainer container c61d7e733e13a94dab2e27a6d60493495613c0c0bf739af81a27746aa838d0e7. Sep 12 17:43:42.664635 systemd[1]: Started cri-containerd-0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386.scope - libcontainer container 0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386. 
Sep 12 17:43:42.737455 containerd[1948]: time="2025-09-12T17:43:42.737240783Z" level=info msg="StartContainer for \"e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d\" returns successfully" Sep 12 17:43:42.740229 containerd[1948]: time="2025-09-12T17:43:42.740171385Z" level=info msg="StartContainer for \"c61d7e733e13a94dab2e27a6d60493495613c0c0bf739af81a27746aa838d0e7\" returns successfully" Sep 12 17:43:42.777428 kubelet[2938]: W0912 17:43:42.777337 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.83:6443: connect: connection refused Sep 12 17:43:42.778437 kubelet[2938]: E0912 17:43:42.777861 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.83:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:42.781827 containerd[1948]: time="2025-09-12T17:43:42.781784214Z" level=info msg="StartContainer for \"0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386\" returns successfully" Sep 12 17:43:42.946320 kubelet[2938]: E0912 17:43:42.946191 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-83?timeout=10s\": dial tcp 172.31.16.83:6443: connect: connection refused" interval="1.6s" Sep 12 17:43:42.963127 kubelet[2938]: W0912 17:43:42.963041 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-83&limit=500&resourceVersion=0": dial tcp 172.31.16.83:6443: connect: connection refused Sep 12 17:43:42.963280 kubelet[2938]: E0912 17:43:42.963138 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-83&limit=500&resourceVersion=0\": dial tcp 172.31.16.83:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:43.124889 kubelet[2938]: W0912 17:43:43.124810 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.83:6443: connect: connection refused Sep 12 17:43:43.125056 kubelet[2938]: E0912 17:43:43.124901 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.83:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:43.145190 kubelet[2938]: I0912 17:43:43.145155 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-83" Sep 12 17:43:43.145554 kubelet[2938]: E0912 17:43:43.145525 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.83:6443/api/v1/nodes\": dial tcp 172.31.16.83:6443: connect: connection refused" node="ip-172-31-16-83" Sep 12 17:43:43.613920 kubelet[2938]: E0912 17:43:43.613649 
2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-83\" not found" node="ip-172-31-16-83" Sep 12 17:43:43.615878 kubelet[2938]: E0912 17:43:43.615855 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-83\" not found" node="ip-172-31-16-83" Sep 12 17:43:43.621769 kubelet[2938]: E0912 17:43:43.621591 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-83\" not found" node="ip-172-31-16-83" Sep 12 17:43:44.624377 kubelet[2938]: E0912 17:43:44.624251 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-83\" not found" node="ip-172-31-16-83" Sep 12 17:43:44.625800 kubelet[2938]: E0912 17:43:44.625096 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-83\" not found" node="ip-172-31-16-83" Sep 12 17:43:44.626080 kubelet[2938]: E0912 17:43:44.625314 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-83\" not found" node="ip-172-31-16-83" Sep 12 17:43:44.750469 kubelet[2938]: I0912 17:43:44.750445 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-83" Sep 12 17:43:45.645466 kubelet[2938]: E0912 17:43:45.644610 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-83\" not found" node="ip-172-31-16-83" Sep 12 17:43:45.645466 kubelet[2938]: E0912 17:43:45.645078 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-83\" not found" node="ip-172-31-16-83" Sep 12 17:43:45.743140 kubelet[2938]: E0912 17:43:45.743105 2938 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-83\" not found" node="ip-172-31-16-83" Sep 12 17:43:45.832496 kubelet[2938]: E0912 17:43:45.832361 2938 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-83.186499f2872f13ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-83,UID:ip-172-31-16-83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-83,},FirstTimestamp:2025-09-12 17:43:41.510480895 +0000 UTC m=+0.747461265,LastTimestamp:2025-09-12 17:43:41.510480895 +0000 UTC m=+0.747461265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-83,}" Sep 12 17:43:45.887018 kubelet[2938]: E0912 17:43:45.886913 2938 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-83.186499f28931bbd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-83,UID:ip-172-31-16-83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-16-83,},FirstTimestamp:2025-09-12 17:43:41.544209368 +0000 UTC 
m=+0.781189745,LastTimestamp:2025-09-12 17:43:41.544209368 +0000 UTC m=+0.781189745,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-83,}" Sep 12 17:43:45.895359 kubelet[2938]: I0912 17:43:45.894221 2938 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-83" Sep 12 17:43:45.895359 kubelet[2938]: E0912 17:43:45.894266 2938 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-83\": node \"ip-172-31-16-83\" not found" Sep 12 17:43:45.897381 kubelet[2938]: I0912 17:43:45.897291 2938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:45.938702 kubelet[2938]: I0912 17:43:45.938652 2938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-83" Sep 12 17:43:45.963665 kubelet[2938]: E0912 17:43:45.963604 2938 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-83\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:45.963919 kubelet[2938]: E0912 17:43:45.963894 2938 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-83\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-83" Sep 12 17:43:45.963919 kubelet[2938]: I0912 17:43:45.963925 2938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:45.966699 kubelet[2938]: E0912 17:43:45.966663 2938 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-83\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:45.966699 kubelet[2938]: I0912 17:43:45.966699 2938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-83" Sep 12 17:43:45.968777 kubelet[2938]: E0912 17:43:45.968745 2938 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-83\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-83" Sep 12 17:43:46.510968 kubelet[2938]: I0912 17:43:46.510907 2938 apiserver.go:52] "Watching apiserver" Sep 12 17:43:46.538325 kubelet[2938]: I0912 17:43:46.538243 2938 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:43:46.639283 kubelet[2938]: I0912 17:43:46.639250 2938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-83" Sep 12 17:43:47.901838 systemd[1]: Reload requested from client PID 3210 ('systemctl') (unit session-9.scope)... Sep 12 17:43:47.901859 systemd[1]: Reloading... Sep 12 17:43:48.041452 zram_generator::config[3250]: No configuration found. Sep 12 17:43:48.316241 systemd[1]: Reloading finished in 413 ms. Sep 12 17:43:48.342274 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:43:48.355828 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:43:48.356157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:43:48.356238 systemd[1]: kubelet.service: Consumed 1.145s CPU time, 128.3M memory peak. Sep 12 17:43:48.358670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:43:48.677474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:43:48.689078 (kubelet)[3314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:43:48.774655 kubelet[3314]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:43:48.774655 kubelet[3314]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:43:48.774655 kubelet[3314]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:43:48.775119 kubelet[3314]: I0912 17:43:48.774764 3314 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:43:48.783448 kubelet[3314]: I0912 17:43:48.783391 3314 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:43:48.784462 kubelet[3314]: I0912 17:43:48.783673 3314 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:43:48.784462 kubelet[3314]: I0912 17:43:48.784377 3314 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:43:48.787361 kubelet[3314]: I0912 17:43:48.787324 3314 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:43:48.790117 kubelet[3314]: I0912 17:43:48.789922 3314 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:43:48.794602 kubelet[3314]: I0912 17:43:48.794575 3314 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:43:48.797803 kubelet[3314]: I0912 17:43:48.797774 3314 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:43:48.798089 kubelet[3314]: I0912 17:43:48.798055 3314 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:43:48.798293 kubelet[3314]: I0912 17:43:48.798087 3314 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-83","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:43:48.798453 kubelet[3314]: I0912 17:43:48.798304 3314 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:43:48.798453 kubelet[3314]: I0912 17:43:48.798320 3314 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:43:48.798453 kubelet[3314]: I0912 17:43:48.798376 3314 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:43:48.798609 kubelet[3314]: I0912 17:43:48.798586 3314 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:43:48.799032 kubelet[3314]: I0912 17:43:48.799013 3314 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:43:48.799104 kubelet[3314]: I0912 17:43:48.799098 3314 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:43:48.799148 kubelet[3314]: I0912 17:43:48.799113 3314 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:43:48.804238 kubelet[3314]: I0912 17:43:48.804143 3314 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:43:48.806043 kubelet[3314]: I0912 17:43:48.805233 3314 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:43:48.807709 kubelet[3314]: I0912 17:43:48.806885 3314 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:43:48.808108 kubelet[3314]: I0912 17:43:48.808013 3314 server.go:1287] "Started kubelet" Sep 12 17:43:48.819203 kubelet[3314]: I0912 17:43:48.819140 3314 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:43:48.822500 kubelet[3314]: I0912 17:43:48.821819 3314 server.go:479] "Adding 
debug handlers to kubelet server" Sep 12 17:43:48.822500 kubelet[3314]: I0912 17:43:48.822345 3314 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:43:48.824953 kubelet[3314]: I0912 17:43:48.824405 3314 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:43:48.824953 kubelet[3314]: I0912 17:43:48.824944 3314 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:43:48.837091 kubelet[3314]: I0912 17:43:48.837057 3314 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:43:48.840100 kubelet[3314]: I0912 17:43:48.840071 3314 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:43:48.844185 kubelet[3314]: I0912 17:43:48.844157 3314 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:43:48.844701 kubelet[3314]: I0912 17:43:48.844280 3314 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:43:48.846140 kubelet[3314]: I0912 17:43:48.845848 3314 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:43:48.846140 kubelet[3314]: I0912 17:43:48.846047 3314 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:43:48.848583 kubelet[3314]: I0912 17:43:48.848483 3314 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:43:48.850440 kubelet[3314]: I0912 17:43:48.850284 3314 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:43:48.853928 kubelet[3314]: I0912 17:43:48.853519 3314 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:43:48.853928 kubelet[3314]: I0912 17:43:48.853559 3314 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:43:48.853928 kubelet[3314]: I0912 17:43:48.853580 3314 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
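The container_manager_linux line above dumps the effective nodeConfig as JSON, including the default hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A small sketch that pulls those out of an excerpt of that JSON; the excerpt below is trimmed to the relevant field and is not the full dump:

    import json

    # Excerpt of the HardEvictionThresholds field from the nodeConfig JSON logged above.
    node_config_excerpt = """
    {"HardEvictionThresholds":[
     {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
     {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
     {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
     {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
     {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}
    ]}
    """

    for t in json.loads(node_config_excerpt)["HardEvictionThresholds"]:
        value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
        print(f'{t["Signal"]:>22}  {t["Operator"]}  {value}')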
Sep 12 17:43:48.853928 kubelet[3314]: I0912 17:43:48.853590 3314 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:43:48.853928 kubelet[3314]: E0912 17:43:48.853643 3314 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:43:48.921432 kubelet[3314]: I0912 17:43:48.921394 3314 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:43:48.921682 kubelet[3314]: I0912 17:43:48.921646 3314 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:43:48.921757 kubelet[3314]: I0912 17:43:48.921718 3314 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:43:48.921949 kubelet[3314]: I0912 17:43:48.921927 3314 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:43:48.922007 kubelet[3314]: I0912 17:43:48.921944 3314 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:43:48.922007 kubelet[3314]: I0912 17:43:48.921968 3314 policy_none.go:49] "None policy: Start" Sep 12 17:43:48.922007 kubelet[3314]: I0912 17:43:48.921983 3314 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:43:48.922007 kubelet[3314]: I0912 17:43:48.921996 3314 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:43:48.922164 kubelet[3314]: I0912 17:43:48.922140 3314 state_mem.go:75] "Updated machine memory state" Sep 12 17:43:48.931851 sudo[3345]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:43:48.932258 sudo[3345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:43:48.943614 kubelet[3314]: I0912 17:43:48.943155 3314 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:43:48.943614 kubelet[3314]: I0912 17:43:48.943371 3314 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:43:48.943614 kubelet[3314]: I0912 17:43:48.943385 3314 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:43:48.944248 kubelet[3314]: I0912 17:43:48.944175 3314 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:43:48.954753 kubelet[3314]: I0912 17:43:48.954719 3314 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-83" Sep 12 17:43:48.956189 kubelet[3314]: I0912 17:43:48.956158 3314 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-83" Sep 12 17:43:48.956809 kubelet[3314]: I0912 17:43:48.956591 3314 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:48.959131 kubelet[3314]: E0912 17:43:48.958711 3314 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:43:48.981513 kubelet[3314]: E0912 17:43:48.981477 3314 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-83\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-83" Sep 12 17:43:49.051293 kubelet[3314]: I0912 17:43:49.051161 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e962c6cbce051d43f2969411b30c6402-ca-certs\") pod \"kube-apiserver-ip-172-31-16-83\" (UID: \"e962c6cbce051d43f2969411b30c6402\") " pod="kube-system/kube-apiserver-ip-172-31-16-83" Sep 12 17:43:49.051464 kubelet[3314]: I0912 17:43:49.051319 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f6746e5af43db249813f90df75f785f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-83\" (UID: \"7f6746e5af43db249813f90df75f785f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:49.051464 kubelet[3314]: I0912 17:43:49.051347 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e962c6cbce051d43f2969411b30c6402-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-83\" (UID: \"e962c6cbce051d43f2969411b30c6402\") " pod="kube-system/kube-apiserver-ip-172-31-16-83" Sep 12 17:43:49.051464 kubelet[3314]: I0912 17:43:49.051407 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e962c6cbce051d43f2969411b30c6402-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-83\" (UID: \"e962c6cbce051d43f2969411b30c6402\") " pod="kube-system/kube-apiserver-ip-172-31-16-83" Sep 12 17:43:49.051611 kubelet[3314]: I0912 17:43:49.051481 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f6746e5af43db249813f90df75f785f-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-83\" (UID: \"7f6746e5af43db249813f90df75f785f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:49.051611 kubelet[3314]: I0912 17:43:49.051507 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f6746e5af43db249813f90df75f785f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-83\" (UID: \"7f6746e5af43db249813f90df75f785f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:49.051611 kubelet[3314]: I0912 17:43:49.051553 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f6746e5af43db249813f90df75f785f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-83\" (UID: \"7f6746e5af43db249813f90df75f785f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:49.051724 kubelet[3314]: I0912 17:43:49.051579 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f6746e5af43db249813f90df75f785f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-83\" (UID: \"7f6746e5af43db249813f90df75f785f\") " 
pod="kube-system/kube-controller-manager-ip-172-31-16-83" Sep 12 17:43:49.051724 kubelet[3314]: I0912 17:43:49.051637 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/699a7fb92bee64ba7fa1a8e3f97a5883-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-83\" (UID: \"699a7fb92bee64ba7fa1a8e3f97a5883\") " pod="kube-system/kube-scheduler-ip-172-31-16-83" Sep 12 17:43:49.062408 kubelet[3314]: I0912 17:43:49.062330 3314 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-83" Sep 12 17:43:49.073476 kubelet[3314]: I0912 17:43:49.073314 3314 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-83" Sep 12 17:43:49.073750 kubelet[3314]: I0912 17:43:49.073642 3314 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-83" Sep 12 17:43:49.393288 sudo[3345]: pam_unix(sudo:session): session closed for user root Sep 12 17:43:49.801602 kubelet[3314]: I0912 17:43:49.800168 3314 apiserver.go:52] "Watching apiserver" Sep 12 17:43:49.846037 kubelet[3314]: I0912 17:43:49.845995 3314 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:43:49.950137 kubelet[3314]: I0912 17:43:49.950056 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-83" podStartSLOduration=1.950035981 podStartE2EDuration="1.950035981s" podCreationTimestamp="2025-09-12 17:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:49.920011054 +0000 UTC m=+1.223934680" watchObservedRunningTime="2025-09-12 17:43:49.950035981 +0000 UTC m=+1.253959608" Sep 12 17:43:49.950614 kubelet[3314]: I0912 17:43:49.950215 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-83" podStartSLOduration=1.950204355 podStartE2EDuration="1.950204355s" podCreationTimestamp="2025-09-12 17:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:49.945594389 +0000 UTC m=+1.249518014" watchObservedRunningTime="2025-09-12 17:43:49.950204355 +0000 UTC m=+1.254127981" Sep 12 17:43:49.996094 kubelet[3314]: I0912 17:43:49.995792 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-83" podStartSLOduration=3.995768077 podStartE2EDuration="3.995768077s" podCreationTimestamp="2025-09-12 17:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:49.973083545 +0000 UTC m=+1.277007170" watchObservedRunningTime="2025-09-12 17:43:49.995768077 +0000 UTC m=+1.299691696" Sep 12 17:43:51.673731 sudo[2373]: pam_unix(sudo:session): session closed for user root Sep 12 17:43:51.695762 sshd[2361]: Connection closed by 139.178.68.195 port 58678 Sep 12 17:43:51.696892 sshd-session[2355]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:51.703047 systemd[1]: sshd@8-172.31.16.83:22-139.178.68.195:58678.service: Deactivated successfully. Sep 12 17:43:51.706715 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:43:51.706969 systemd[1]: session-9.scope: Consumed 5.253s CPU time, 209.9M memory peak. 
Sep 12 17:43:51.709031 systemd-logind[1901]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:43:51.711795 systemd-logind[1901]: Removed session 9. Sep 12 17:43:53.020907 update_engine[1910]: I20250912 17:43:53.020809 1910 update_attempter.cc:509] Updating boot flags... Sep 12 17:43:54.519818 kubelet[3314]: I0912 17:43:54.519787 3314 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:43:54.521788 containerd[1948]: time="2025-09-12T17:43:54.521052684Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:43:54.523329 kubelet[3314]: I0912 17:43:54.522323 3314 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:43:55.505882 systemd[1]: Created slice kubepods-besteffort-pod007c818e_7295_4d96_b115_826673f48d57.slice - libcontainer container kubepods-besteffort-pod007c818e_7295_4d96_b115_826673f48d57.slice. Sep 12 17:43:55.519210 systemd[1]: Created slice kubepods-burstable-pod9bad6648_0b92_409c_b7be_09b5f6adab99.slice - libcontainer container kubepods-burstable-pod9bad6648_0b92_409c_b7be_09b5f6adab99.slice. Sep 12 17:43:55.608397 kubelet[3314]: I0912 17:43:55.608350 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-cni-path\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.608397 kubelet[3314]: I0912 17:43:55.608396 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-lib-modules\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.608928 kubelet[3314]: I0912 17:43:55.608577 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/007c818e-7295-4d96-b115-826673f48d57-kube-proxy\") pod \"kube-proxy-8qxvc\" (UID: \"007c818e-7295-4d96-b115-826673f48d57\") " pod="kube-system/kube-proxy-8qxvc" Sep 12 17:43:55.608928 kubelet[3314]: I0912 17:43:55.608639 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/007c818e-7295-4d96-b115-826673f48d57-lib-modules\") pod \"kube-proxy-8qxvc\" (UID: \"007c818e-7295-4d96-b115-826673f48d57\") " pod="kube-system/kube-proxy-8qxvc" Sep 12 17:43:55.608928 kubelet[3314]: I0912 17:43:55.608657 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66qks\" (UniqueName: \"kubernetes.io/projected/9bad6648-0b92-409c-b7be-09b5f6adab99-kube-api-access-66qks\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.608928 kubelet[3314]: I0912 17:43:55.608755 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-hostproc\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.608928 kubelet[3314]: I0912 17:43:55.608808 3314 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bad6648-0b92-409c-b7be-09b5f6adab99-hubble-tls\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.608928 kubelet[3314]: I0912 17:43:55.608824 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/007c818e-7295-4d96-b115-826673f48d57-xtables-lock\") pod \"kube-proxy-8qxvc\" (UID: \"007c818e-7295-4d96-b115-826673f48d57\") " pod="kube-system/kube-proxy-8qxvc" Sep 12 17:43:55.609094 kubelet[3314]: I0912 17:43:55.608869 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8ctr\" (UniqueName: \"kubernetes.io/projected/007c818e-7295-4d96-b115-826673f48d57-kube-api-access-h8ctr\") pod \"kube-proxy-8qxvc\" (UID: \"007c818e-7295-4d96-b115-826673f48d57\") " pod="kube-system/kube-proxy-8qxvc" Sep 12 17:43:55.609094 kubelet[3314]: I0912 17:43:55.608885 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-cilium-run\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.609094 kubelet[3314]: I0912 17:43:55.608900 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-bpf-maps\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.609094 kubelet[3314]: I0912 17:43:55.608953 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-cilium-cgroup\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.609094 kubelet[3314]: I0912 17:43:55.608974 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-host-proc-sys-kernel\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.609217 kubelet[3314]: I0912 17:43:55.609021 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-host-proc-sys-net\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.609217 kubelet[3314]: I0912 17:43:55.609039 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-etc-cni-netd\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.609217 kubelet[3314]: I0912 17:43:55.609056 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-xtables-lock\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.609217 kubelet[3314]: I0912 17:43:55.609100 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bad6648-0b92-409c-b7be-09b5f6adab99-clustermesh-secrets\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.609217 kubelet[3314]: I0912 17:43:55.609121 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bad6648-0b92-409c-b7be-09b5f6adab99-cilium-config-path\") pod \"cilium-bjrqn\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " pod="kube-system/cilium-bjrqn" Sep 12 17:43:55.660294 kubelet[3314]: I0912 17:43:55.660188 3314 status_manager.go:890] "Failed to get status for pod" podUID="856d4792-966f-4ba1-a20f-e8d9d4a9bdb0" pod="kube-system/cilium-operator-6c4d7847fc-q55bq" err="pods \"cilium-operator-6c4d7847fc-q55bq\" is forbidden: User \"system:node:ip-172-31-16-83\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-83' and this object" Sep 12 17:43:55.666234 systemd[1]: Created slice kubepods-besteffort-pod856d4792_966f_4ba1_a20f_e8d9d4a9bdb0.slice - libcontainer container kubepods-besteffort-pod856d4792_966f_4ba1_a20f_e8d9d4a9bdb0.slice. Sep 12 17:43:55.710522 kubelet[3314]: I0912 17:43:55.710486 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/856d4792-966f-4ba1-a20f-e8d9d4a9bdb0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-q55bq\" (UID: \"856d4792-966f-4ba1-a20f-e8d9d4a9bdb0\") " pod="kube-system/cilium-operator-6c4d7847fc-q55bq" Sep 12 17:43:55.710807 kubelet[3314]: I0912 17:43:55.710752 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crhzg\" (UniqueName: \"kubernetes.io/projected/856d4792-966f-4ba1-a20f-e8d9d4a9bdb0-kube-api-access-crhzg\") pod \"cilium-operator-6c4d7847fc-q55bq\" (UID: \"856d4792-966f-4ba1-a20f-e8d9d4a9bdb0\") " pod="kube-system/cilium-operator-6c4d7847fc-q55bq" Sep 12 17:43:55.818963 containerd[1948]: time="2025-09-12T17:43:55.818917264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8qxvc,Uid:007c818e-7295-4d96-b115-826673f48d57,Namespace:kube-system,Attempt:0,}" Sep 12 17:43:55.825509 containerd[1948]: time="2025-09-12T17:43:55.825179216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bjrqn,Uid:9bad6648-0b92-409c-b7be-09b5f6adab99,Namespace:kube-system,Attempt:0,}" Sep 12 17:43:55.862902 containerd[1948]: time="2025-09-12T17:43:55.862846249Z" level=info msg="connecting to shim 88c8b7f0e55b0e6cd53decf8c1776ca9eb31c62a1f1c066f3a7df450a6627484" address="unix:///run/containerd/s/3ec76e8f65a3f631f245d3df2adc984c8e6431df7dbfdb424af8031c393733e8" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:43:55.877999 containerd[1948]: time="2025-09-12T17:43:55.877564818Z" level=info msg="connecting to shim 326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c" address="unix:///run/containerd/s/55b149e5cc83ddb8fece707c5fcbe038e6af5fa3c7b07d9539e8302c5ac62bbc" namespace=k8s.io 
protocol=ttrpc version=3 Sep 12 17:43:55.904756 systemd[1]: Started cri-containerd-88c8b7f0e55b0e6cd53decf8c1776ca9eb31c62a1f1c066f3a7df450a6627484.scope - libcontainer container 88c8b7f0e55b0e6cd53decf8c1776ca9eb31c62a1f1c066f3a7df450a6627484. Sep 12 17:43:55.927127 systemd[1]: Started cri-containerd-326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c.scope - libcontainer container 326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c. Sep 12 17:43:55.970981 containerd[1948]: time="2025-09-12T17:43:55.970941761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-q55bq,Uid:856d4792-966f-4ba1-a20f-e8d9d4a9bdb0,Namespace:kube-system,Attempt:0,}" Sep 12 17:43:56.008041 containerd[1948]: time="2025-09-12T17:43:56.005300886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8qxvc,Uid:007c818e-7295-4d96-b115-826673f48d57,Namespace:kube-system,Attempt:0,} returns sandbox id \"88c8b7f0e55b0e6cd53decf8c1776ca9eb31c62a1f1c066f3a7df450a6627484\"" Sep 12 17:43:56.015422 containerd[1948]: time="2025-09-12T17:43:56.014037227Z" level=info msg="CreateContainer within sandbox \"88c8b7f0e55b0e6cd53decf8c1776ca9eb31c62a1f1c066f3a7df450a6627484\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:43:56.040299 containerd[1948]: time="2025-09-12T17:43:56.040251908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bjrqn,Uid:9bad6648-0b92-409c-b7be-09b5f6adab99,Namespace:kube-system,Attempt:0,} returns sandbox id \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\"" Sep 12 17:43:56.047094 containerd[1948]: time="2025-09-12T17:43:56.047055775Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:43:56.056974 containerd[1948]: time="2025-09-12T17:43:56.056923925Z" level=info msg="Container c848723051b49f4eb29a94c600857ed10d484cadc387aea8a1e39459c2dab905: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:43:56.096095 containerd[1948]: time="2025-09-12T17:43:56.095657887Z" level=info msg="connecting to shim 3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48" address="unix:///run/containerd/s/014c6f7faf77e90e27ac2e0750458b7ec05dcb71dc84ac555edc33aff26e4b8c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:43:56.097955 containerd[1948]: time="2025-09-12T17:43:56.097678765Z" level=info msg="CreateContainer within sandbox \"88c8b7f0e55b0e6cd53decf8c1776ca9eb31c62a1f1c066f3a7df450a6627484\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c848723051b49f4eb29a94c600857ed10d484cadc387aea8a1e39459c2dab905\"" Sep 12 17:43:56.100377 containerd[1948]: time="2025-09-12T17:43:56.100331456Z" level=info msg="StartContainer for \"c848723051b49f4eb29a94c600857ed10d484cadc387aea8a1e39459c2dab905\"" Sep 12 17:43:56.105349 containerd[1948]: time="2025-09-12T17:43:56.105216769Z" level=info msg="connecting to shim c848723051b49f4eb29a94c600857ed10d484cadc387aea8a1e39459c2dab905" address="unix:///run/containerd/s/3ec76e8f65a3f631f245d3df2adc984c8e6431df7dbfdb424af8031c393733e8" protocol=ttrpc version=3 Sep 12 17:43:56.135675 systemd[1]: Started cri-containerd-3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48.scope - libcontainer container 3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48. 
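One way to correlate the sandbox IDs that start appearing here with the pods they belong to is to scrape containerd's "returns sandbox id" messages. A sketch with an illustrative regex, using the kube-proxy-8qxvc line from this log as the sample; the parsing approach is an assumption for log analysis, not a containerd interface:

    import re

    # Sample copied from the containerd entry above; the journal escaping is kept,
    # so the sandbox id is wrapped in \" inside the msg="..." field.
    line = ('msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8qxvc,'
            'Uid:007c818e-7295-4d96-b115-826673f48d57,Namespace:kube-system,Attempt:0,}'
            ' returns sandbox id \\"88c8b7f0e55b0e6cd53decf8c1776ca9eb31c62a1f1c066f3a7df450a6627484\\""')

    pattern = re.compile(r'PodSandboxMetadata\{Name:([^,]+),.*?returns sandbox id \\"([0-9a-f]+)\\"')
    match = pattern.search(line)
    if match:
        pod, sandbox = match.groups()
        print(f"{pod} -> {sandbox[:12]}")  # kube-proxy-8qxvc -> 88c8b7f0e55b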
Sep 12 17:43:56.141723 systemd[1]: Started cri-containerd-c848723051b49f4eb29a94c600857ed10d484cadc387aea8a1e39459c2dab905.scope - libcontainer container c848723051b49f4eb29a94c600857ed10d484cadc387aea8a1e39459c2dab905. Sep 12 17:43:56.226437 containerd[1948]: time="2025-09-12T17:43:56.226214476Z" level=info msg="StartContainer for \"c848723051b49f4eb29a94c600857ed10d484cadc387aea8a1e39459c2dab905\" returns successfully" Sep 12 17:43:56.240906 containerd[1948]: time="2025-09-12T17:43:56.240872238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-q55bq,Uid:856d4792-966f-4ba1-a20f-e8d9d4a9bdb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\"" Sep 12 17:43:56.928057 kubelet[3314]: I0912 17:43:56.927876 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8qxvc" podStartSLOduration=1.927858692 podStartE2EDuration="1.927858692s" podCreationTimestamp="2025-09-12 17:43:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:56.927409495 +0000 UTC m=+8.231333118" watchObservedRunningTime="2025-09-12 17:43:56.927858692 +0000 UTC m=+8.231782317" Sep 12 17:44:03.636088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount404490122.mount: Deactivated successfully. Sep 12 17:44:06.313842 containerd[1948]: time="2025-09-12T17:44:06.313790462Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:06.315983 containerd[1948]: time="2025-09-12T17:44:06.315914976Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 17:44:06.319433 containerd[1948]: time="2025-09-12T17:44:06.318016490Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:06.320063 containerd[1948]: time="2025-09-12T17:44:06.320024429Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.272920912s" Sep 12 17:44:06.320236 containerd[1948]: time="2025-09-12T17:44:06.320215790Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 17:44:06.322882 containerd[1948]: time="2025-09-12T17:44:06.322845396Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:44:06.326460 containerd[1948]: time="2025-09-12T17:44:06.326401736Z" level=info msg="CreateContainer within sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:44:06.361617 containerd[1948]: 
time="2025-09-12T17:44:06.361581655Z" level=info msg="Container 88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:44:06.388037 containerd[1948]: time="2025-09-12T17:44:06.387994784Z" level=info msg="CreateContainer within sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\"" Sep 12 17:44:06.389466 containerd[1948]: time="2025-09-12T17:44:06.388787852Z" level=info msg="StartContainer for \"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\"" Sep 12 17:44:06.390614 containerd[1948]: time="2025-09-12T17:44:06.390580740Z" level=info msg="connecting to shim 88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174" address="unix:///run/containerd/s/55b149e5cc83ddb8fece707c5fcbe038e6af5fa3c7b07d9539e8302c5ac62bbc" protocol=ttrpc version=3 Sep 12 17:44:06.430636 systemd[1]: Started cri-containerd-88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174.scope - libcontainer container 88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174. Sep 12 17:44:06.466364 containerd[1948]: time="2025-09-12T17:44:06.466304043Z" level=info msg="StartContainer for \"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\" returns successfully" Sep 12 17:44:06.476904 systemd[1]: cri-containerd-88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174.scope: Deactivated successfully. Sep 12 17:44:06.505548 containerd[1948]: time="2025-09-12T17:44:06.505495212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\" id:\"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\" pid:3907 exited_at:{seconds:1757699046 nanos:480095042}" Sep 12 17:44:06.514454 containerd[1948]: time="2025-09-12T17:44:06.514391042Z" level=info msg="received exit event container_id:\"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\" id:\"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\" pid:3907 exited_at:{seconds:1757699046 nanos:480095042}" Sep 12 17:44:06.551261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174-rootfs.mount: Deactivated successfully. 
Sep 12 17:44:07.003583 containerd[1948]: time="2025-09-12T17:44:07.003542101Z" level=info msg="CreateContainer within sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:44:07.020814 containerd[1948]: time="2025-09-12T17:44:07.020771425Z" level=info msg="Container fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:44:07.032102 containerd[1948]: time="2025-09-12T17:44:07.032055654Z" level=info msg="CreateContainer within sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\"" Sep 12 17:44:07.033049 containerd[1948]: time="2025-09-12T17:44:07.033019387Z" level=info msg="StartContainer for \"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\"" Sep 12 17:44:07.034328 containerd[1948]: time="2025-09-12T17:44:07.034289664Z" level=info msg="connecting to shim fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291" address="unix:///run/containerd/s/55b149e5cc83ddb8fece707c5fcbe038e6af5fa3c7b07d9539e8302c5ac62bbc" protocol=ttrpc version=3 Sep 12 17:44:07.062656 systemd[1]: Started cri-containerd-fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291.scope - libcontainer container fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291. Sep 12 17:44:07.100239 containerd[1948]: time="2025-09-12T17:44:07.100122157Z" level=info msg="StartContainer for \"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\" returns successfully" Sep 12 17:44:07.119642 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:44:07.121256 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:44:07.121904 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:44:07.125166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:44:07.125878 systemd[1]: cri-containerd-fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291.scope: Deactivated successfully. Sep 12 17:44:07.141309 containerd[1948]: time="2025-09-12T17:44:07.140960237Z" level=info msg="received exit event container_id:\"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\" id:\"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\" pid:3954 exited_at:{seconds:1757699047 nanos:133361015}" Sep 12 17:44:07.141309 containerd[1948]: time="2025-09-12T17:44:07.141237964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\" id:\"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\" pid:3954 exited_at:{seconds:1757699047 nanos:133361015}" Sep 12 17:44:07.176768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:44:07.667804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2456680078.mount: Deactivated successfully. Sep 12 17:44:08.015825 containerd[1948]: time="2025-09-12T17:44:08.015367328Z" level=info msg="CreateContainer within sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:44:08.051015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181514794.mount: Deactivated successfully. 
Sep 12 17:44:08.053432 containerd[1948]: time="2025-09-12T17:44:08.052534161Z" level=info msg="Container bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:44:08.078621 containerd[1948]: time="2025-09-12T17:44:08.078572984Z" level=info msg="CreateContainer within sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\"" Sep 12 17:44:08.082536 containerd[1948]: time="2025-09-12T17:44:08.082409879Z" level=info msg="StartContainer for \"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\"" Sep 12 17:44:08.085870 containerd[1948]: time="2025-09-12T17:44:08.085828048Z" level=info msg="connecting to shim bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3" address="unix:///run/containerd/s/55b149e5cc83ddb8fece707c5fcbe038e6af5fa3c7b07d9539e8302c5ac62bbc" protocol=ttrpc version=3 Sep 12 17:44:08.128951 systemd[1]: Started cri-containerd-bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3.scope - libcontainer container bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3. Sep 12 17:44:08.229397 containerd[1948]: time="2025-09-12T17:44:08.228760084Z" level=info msg="StartContainer for \"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\" returns successfully" Sep 12 17:44:08.230263 systemd[1]: cri-containerd-bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3.scope: Deactivated successfully. Sep 12 17:44:08.237833 containerd[1948]: time="2025-09-12T17:44:08.237788640Z" level=info msg="received exit event container_id:\"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\" id:\"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\" pid:4015 exited_at:{seconds:1757699048 nanos:236947141}" Sep 12 17:44:08.255946 containerd[1948]: time="2025-09-12T17:44:08.255661412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\" id:\"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\" pid:4015 exited_at:{seconds:1757699048 nanos:236947141}" Sep 12 17:44:08.542857 containerd[1948]: time="2025-09-12T17:44:08.542791995Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:08.543496 containerd[1948]: time="2025-09-12T17:44:08.543464295Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.220585888s" Sep 12 17:44:08.543496 containerd[1948]: time="2025-09-12T17:44:08.543495218Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 17:44:08.543842 containerd[1948]: time="2025-09-12T17:44:08.543789751Z" level=info msg="ImageCreate event 
name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:08.544471 containerd[1948]: time="2025-09-12T17:44:08.544285692Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 17:44:08.547004 containerd[1948]: time="2025-09-12T17:44:08.546949847Z" level=info msg="CreateContainer within sandbox \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:44:08.563355 containerd[1948]: time="2025-09-12T17:44:08.562747563Z" level=info msg="Container b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:44:08.577330 containerd[1948]: time="2025-09-12T17:44:08.577286530Z" level=info msg="CreateContainer within sandbox \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\"" Sep 12 17:44:08.578055 containerd[1948]: time="2025-09-12T17:44:08.578029924Z" level=info msg="StartContainer for \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\"" Sep 12 17:44:08.579322 containerd[1948]: time="2025-09-12T17:44:08.579264405Z" level=info msg="connecting to shim b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b" address="unix:///run/containerd/s/014c6f7faf77e90e27ac2e0750458b7ec05dcb71dc84ac555edc33aff26e4b8c" protocol=ttrpc version=3 Sep 12 17:44:08.609654 systemd[1]: Started cri-containerd-b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b.scope - libcontainer container b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b. 
Sep 12 17:44:08.652069 containerd[1948]: time="2025-09-12T17:44:08.652025994Z" level=info msg="StartContainer for \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\" returns successfully" Sep 12 17:44:09.023724 containerd[1948]: time="2025-09-12T17:44:09.023517827Z" level=info msg="CreateContainer within sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:44:09.039490 containerd[1948]: time="2025-09-12T17:44:09.039102871Z" level=info msg="Container d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:44:09.059619 containerd[1948]: time="2025-09-12T17:44:09.059565695Z" level=info msg="CreateContainer within sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\"" Sep 12 17:44:09.060790 containerd[1948]: time="2025-09-12T17:44:09.060752375Z" level=info msg="StartContainer for \"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\"" Sep 12 17:44:09.063832 containerd[1948]: time="2025-09-12T17:44:09.063628067Z" level=info msg="connecting to shim d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7" address="unix:///run/containerd/s/55b149e5cc83ddb8fece707c5fcbe038e6af5fa3c7b07d9539e8302c5ac62bbc" protocol=ttrpc version=3 Sep 12 17:44:09.122601 systemd[1]: Started cri-containerd-d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7.scope - libcontainer container d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7. Sep 12 17:44:09.218806 systemd[1]: cri-containerd-d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7.scope: Deactivated successfully. 
Sep 12 17:44:09.223838 containerd[1948]: time="2025-09-12T17:44:09.223348463Z" level=info msg="received exit event container_id:\"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\" id:\"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\" pid:4093 exited_at:{seconds:1757699049 nanos:221599153}" Sep 12 17:44:09.225907 containerd[1948]: time="2025-09-12T17:44:09.225871115Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\" id:\"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\" pid:4093 exited_at:{seconds:1757699049 nanos:221599153}" Sep 12 17:44:09.243873 containerd[1948]: time="2025-09-12T17:44:09.243829571Z" level=info msg="StartContainer for \"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\" returns successfully" Sep 12 17:44:10.038029 containerd[1948]: time="2025-09-12T17:44:10.037987874Z" level=info msg="CreateContainer within sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:44:10.060940 kubelet[3314]: I0912 17:44:10.060826 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-q55bq" podStartSLOduration=2.758039997 podStartE2EDuration="15.060802839s" podCreationTimestamp="2025-09-12 17:43:55 +0000 UTC" firstStartedPulling="2025-09-12 17:43:56.242504615 +0000 UTC m=+7.546428221" lastFinishedPulling="2025-09-12 17:44:08.545267458 +0000 UTC m=+19.849191063" observedRunningTime="2025-09-12 17:44:09.178669485 +0000 UTC m=+20.482593111" watchObservedRunningTime="2025-09-12 17:44:10.060802839 +0000 UTC m=+21.364726457" Sep 12 17:44:10.081046 containerd[1948]: time="2025-09-12T17:44:10.076950573Z" level=info msg="Container 7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:44:10.102594 containerd[1948]: time="2025-09-12T17:44:10.102546158Z" level=info msg="CreateContainer within sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\"" Sep 12 17:44:10.103327 containerd[1948]: time="2025-09-12T17:44:10.103292793Z" level=info msg="StartContainer for \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\"" Sep 12 17:44:10.104926 containerd[1948]: time="2025-09-12T17:44:10.104875915Z" level=info msg="connecting to shim 7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2" address="unix:///run/containerd/s/55b149e5cc83ddb8fece707c5fcbe038e6af5fa3c7b07d9539e8302c5ac62bbc" protocol=ttrpc version=3 Sep 12 17:44:10.141672 systemd[1]: Started cri-containerd-7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2.scope - libcontainer container 7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2. 
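Between 17:44:06 and 17:44:10 the cilium-bjrqn pod is assembled one container at a time inside its sandbox: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then cilium-agent. A sketch that recovers that order from the ContainerMetadata fields of the CreateContainer messages; the list below is copied from the entries above rather than parsed from a live journal:

    import re

    # Metadata fragments copied from the CreateContainer entries above.
    create_msgs = [
        "&ContainerMetadata{Name:mount-cgroup,Attempt:0,}",
        "&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}",
        "&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}",
        "&ContainerMetadata{Name:clean-cilium-state,Attempt:0,}",
        "&ContainerMetadata{Name:cilium-agent,Attempt:0,}",
    ]

    name_re = re.compile(r"ContainerMetadata\{Name:([^,]+),")
    print([name_re.search(m).group(1) for m in create_msgs])
    # ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs', 'clean-cilium-state', 'cilium-agent']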
Sep 12 17:44:10.207634 containerd[1948]: time="2025-09-12T17:44:10.207563486Z" level=info msg="StartContainer for \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" returns successfully" Sep 12 17:44:10.363766 containerd[1948]: time="2025-09-12T17:44:10.363725778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" id:\"2bfaf2ca2b3a8e7d1cf2d69bca2aa189240196baa2e7811ae68100e76acb1c7c\" pid:4158 exited_at:{seconds:1757699050 nanos:363083784}" Sep 12 17:44:10.435930 kubelet[3314]: I0912 17:44:10.435901 3314 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:44:10.504166 systemd[1]: Created slice kubepods-burstable-pod0085cfc4_5224_4ea5_9bdc_7849ca5ce3a6.slice - libcontainer container kubepods-burstable-pod0085cfc4_5224_4ea5_9bdc_7849ca5ce3a6.slice. Sep 12 17:44:10.515925 systemd[1]: Created slice kubepods-burstable-pod0b62f2eb_8a8b_461a_9908_800d8abb5e81.slice - libcontainer container kubepods-burstable-pod0b62f2eb_8a8b_461a_9908_800d8abb5e81.slice. Sep 12 17:44:10.556065 kubelet[3314]: I0912 17:44:10.556026 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-472cz\" (UniqueName: \"kubernetes.io/projected/0b62f2eb-8a8b-461a-9908-800d8abb5e81-kube-api-access-472cz\") pod \"coredns-668d6bf9bc-nsl6v\" (UID: \"0b62f2eb-8a8b-461a-9908-800d8abb5e81\") " pod="kube-system/coredns-668d6bf9bc-nsl6v" Sep 12 17:44:10.556065 kubelet[3314]: I0912 17:44:10.556070 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b62f2eb-8a8b-461a-9908-800d8abb5e81-config-volume\") pod \"coredns-668d6bf9bc-nsl6v\" (UID: \"0b62f2eb-8a8b-461a-9908-800d8abb5e81\") " pod="kube-system/coredns-668d6bf9bc-nsl6v" Sep 12 17:44:10.556774 kubelet[3314]: I0912 17:44:10.556099 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0085cfc4-5224-4ea5-9bdc-7849ca5ce3a6-config-volume\") pod \"coredns-668d6bf9bc-qtkmb\" (UID: \"0085cfc4-5224-4ea5-9bdc-7849ca5ce3a6\") " pod="kube-system/coredns-668d6bf9bc-qtkmb" Sep 12 17:44:10.556774 kubelet[3314]: I0912 17:44:10.556115 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v97mh\" (UniqueName: \"kubernetes.io/projected/0085cfc4-5224-4ea5-9bdc-7849ca5ce3a6-kube-api-access-v97mh\") pod \"coredns-668d6bf9bc-qtkmb\" (UID: \"0085cfc4-5224-4ea5-9bdc-7849ca5ce3a6\") " pod="kube-system/coredns-668d6bf9bc-qtkmb" Sep 12 17:44:10.812478 containerd[1948]: time="2025-09-12T17:44:10.812291499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qtkmb,Uid:0085cfc4-5224-4ea5-9bdc-7849ca5ce3a6,Namespace:kube-system,Attempt:0,}" Sep 12 17:44:10.828469 containerd[1948]: time="2025-09-12T17:44:10.828115199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nsl6v,Uid:0b62f2eb-8a8b-461a-9908-800d8abb5e81,Namespace:kube-system,Attempt:0,}" Sep 12 17:44:11.096151 kubelet[3314]: I0912 17:44:11.096082 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bjrqn" podStartSLOduration=5.818423029 podStartE2EDuration="16.096063429s" podCreationTimestamp="2025-09-12 17:43:55 +0000 UTC" firstStartedPulling="2025-09-12 17:43:56.044507183 +0000 UTC 
m=+7.348430797" lastFinishedPulling="2025-09-12 17:44:06.322147569 +0000 UTC m=+17.626071197" observedRunningTime="2025-09-12 17:44:11.0937319 +0000 UTC m=+22.397655536" watchObservedRunningTime="2025-09-12 17:44:11.096063429 +0000 UTC m=+22.399987054" Sep 12 17:44:12.920670 systemd-networkd[1817]: cilium_host: Link UP Sep 12 17:44:12.920797 systemd-networkd[1817]: cilium_net: Link UP Sep 12 17:44:12.920928 systemd-networkd[1817]: cilium_net: Gained carrier Sep 12 17:44:12.921048 systemd-networkd[1817]: cilium_host: Gained carrier Sep 12 17:44:12.921622 (udev-worker)[4218]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:44:12.923921 (udev-worker)[4256]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:44:13.065177 systemd-networkd[1817]: cilium_vxlan: Link UP Sep 12 17:44:13.065194 systemd-networkd[1817]: cilium_vxlan: Gained carrier Sep 12 17:44:13.377712 systemd-networkd[1817]: cilium_net: Gained IPv6LL Sep 12 17:44:13.402567 systemd-networkd[1817]: cilium_host: Gained IPv6LL Sep 12 17:44:13.892450 kernel: NET: Registered PF_ALG protocol family Sep 12 17:44:14.666827 (udev-worker)[4265]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:44:14.666922 systemd-networkd[1817]: lxc_health: Link UP Sep 12 17:44:14.676530 systemd-networkd[1817]: lxc_health: Gained carrier Sep 12 17:44:14.941822 kernel: eth0: renamed from tmpf8e96 Sep 12 17:44:14.939341 systemd-networkd[1817]: lxc65538f2cae66: Link UP Sep 12 17:44:14.950506 kernel: eth0: renamed from tmp027f3 Sep 12 17:44:14.955059 systemd-networkd[1817]: lxc77d67b5d9e83: Link UP Sep 12 17:44:14.955407 systemd-networkd[1817]: lxc65538f2cae66: Gained carrier Sep 12 17:44:14.960907 systemd-networkd[1817]: lxc77d67b5d9e83: Gained carrier Sep 12 17:44:15.089718 systemd-networkd[1817]: cilium_vxlan: Gained IPv6LL Sep 12 17:44:16.497734 systemd-networkd[1817]: lxc_health: Gained IPv6LL Sep 12 17:44:16.753791 systemd-networkd[1817]: lxc65538f2cae66: Gained IPv6LL Sep 12 17:44:16.945759 systemd-networkd[1817]: lxc77d67b5d9e83: Gained IPv6LL Sep 12 17:44:18.995142 ntpd[1870]: Listen normally on 7 cilium_host 192.168.0.163:123 Sep 12 17:44:18.995945 ntpd[1870]: 12 Sep 17:44:18 ntpd[1870]: Listen normally on 7 cilium_host 192.168.0.163:123 Sep 12 17:44:18.995945 ntpd[1870]: 12 Sep 17:44:18 ntpd[1870]: Listen normally on 8 cilium_net [fe80::60a4:12ff:fef3:c0d2%4]:123 Sep 12 17:44:18.995945 ntpd[1870]: 12 Sep 17:44:18 ntpd[1870]: Listen normally on 9 cilium_host [fe80::106b:50ff:fed2:9103%5]:123 Sep 12 17:44:18.995945 ntpd[1870]: 12 Sep 17:44:18 ntpd[1870]: Listen normally on 10 cilium_vxlan [fe80::74ff:c7ff:fe78:92cd%6]:123 Sep 12 17:44:18.995945 ntpd[1870]: 12 Sep 17:44:18 ntpd[1870]: Listen normally on 11 lxc_health [fe80::45a:27ff:fe17:b278%8]:123 Sep 12 17:44:18.995945 ntpd[1870]: 12 Sep 17:44:18 ntpd[1870]: Listen normally on 12 lxc65538f2cae66 [fe80::5064:84ff:febc:d597%10]:123 Sep 12 17:44:18.995945 ntpd[1870]: 12 Sep 17:44:18 ntpd[1870]: Listen normally on 13 lxc77d67b5d9e83 [fe80::44f1:5bff:fe05:346e%12]:123 Sep 12 17:44:18.995238 ntpd[1870]: Listen normally on 8 cilium_net [fe80::60a4:12ff:fef3:c0d2%4]:123 Sep 12 17:44:18.995294 ntpd[1870]: Listen normally on 9 cilium_host [fe80::106b:50ff:fed2:9103%5]:123 Sep 12 17:44:18.995336 ntpd[1870]: Listen normally on 10 cilium_vxlan [fe80::74ff:c7ff:fe78:92cd%6]:123 Sep 12 17:44:18.995373 ntpd[1870]: Listen normally on 11 lxc_health [fe80::45a:27ff:fe17:b278%8]:123 Sep 12 17:44:18.995409 ntpd[1870]: Listen normally on 12 
lxc65538f2cae66 [fe80::5064:84ff:febc:d597%10]:123 Sep 12 17:44:18.995485 ntpd[1870]: Listen normally on 13 lxc77d67b5d9e83 [fe80::44f1:5bff:fe05:346e%12]:123 Sep 12 17:44:19.562762 containerd[1948]: time="2025-09-12T17:44:19.558827756Z" level=info msg="connecting to shim f8e9658154bb19cdba97553e8807f1763a2feb011b6c989fdddc436932bc2b78" address="unix:///run/containerd/s/6604f79b46003ec0ef73383ae7d554298a01f9cb228b70af6b05fe16ef40220c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:44:19.571437 containerd[1948]: time="2025-09-12T17:44:19.570730928Z" level=info msg="connecting to shim 027f335071bc8b1241042a4ce60f3c20e7c09bb0d9bb90e763f3a4d82b64d18c" address="unix:///run/containerd/s/1866f1a62eaa208a814ce0b702f4e7382a7610873e784c99ad5295f57031e23d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:44:19.623913 systemd[1]: Started cri-containerd-027f335071bc8b1241042a4ce60f3c20e7c09bb0d9bb90e763f3a4d82b64d18c.scope - libcontainer container 027f335071bc8b1241042a4ce60f3c20e7c09bb0d9bb90e763f3a4d82b64d18c. Sep 12 17:44:19.657778 systemd[1]: Started cri-containerd-f8e9658154bb19cdba97553e8807f1763a2feb011b6c989fdddc436932bc2b78.scope - libcontainer container f8e9658154bb19cdba97553e8807f1763a2feb011b6c989fdddc436932bc2b78. Sep 12 17:44:19.764712 containerd[1948]: time="2025-09-12T17:44:19.764659237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qtkmb,Uid:0085cfc4-5224-4ea5-9bdc-7849ca5ce3a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"027f335071bc8b1241042a4ce60f3c20e7c09bb0d9bb90e763f3a4d82b64d18c\"" Sep 12 17:44:19.771879 containerd[1948]: time="2025-09-12T17:44:19.771841415Z" level=info msg="CreateContainer within sandbox \"027f335071bc8b1241042a4ce60f3c20e7c09bb0d9bb90e763f3a4d82b64d18c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:44:19.777447 containerd[1948]: time="2025-09-12T17:44:19.777385507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nsl6v,Uid:0b62f2eb-8a8b-461a-9908-800d8abb5e81,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8e9658154bb19cdba97553e8807f1763a2feb011b6c989fdddc436932bc2b78\"" Sep 12 17:44:19.782913 containerd[1948]: time="2025-09-12T17:44:19.782870468Z" level=info msg="CreateContainer within sandbox \"f8e9658154bb19cdba97553e8807f1763a2feb011b6c989fdddc436932bc2b78\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:44:19.803628 containerd[1948]: time="2025-09-12T17:44:19.803584179Z" level=info msg="Container b82e1b66874c3ad337b44b5292964946774ae8cae04be615969b0a205485574d: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:44:19.806238 containerd[1948]: time="2025-09-12T17:44:19.806194076Z" level=info msg="Container ec65fc362975f8d72188459ede464275303d494b0f85d2d6c29b8fa256651ef1: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:44:19.818589 containerd[1948]: time="2025-09-12T17:44:19.818534299Z" level=info msg="CreateContainer within sandbox \"027f335071bc8b1241042a4ce60f3c20e7c09bb0d9bb90e763f3a4d82b64d18c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b82e1b66874c3ad337b44b5292964946774ae8cae04be615969b0a205485574d\"" Sep 12 17:44:19.819194 containerd[1948]: time="2025-09-12T17:44:19.819002755Z" level=info msg="StartContainer for \"b82e1b66874c3ad337b44b5292964946774ae8cae04be615969b0a205485574d\"" Sep 12 17:44:19.819775 containerd[1948]: time="2025-09-12T17:44:19.819742182Z" level=info msg="connecting to shim b82e1b66874c3ad337b44b5292964946774ae8cae04be615969b0a205485574d" 
address="unix:///run/containerd/s/1866f1a62eaa208a814ce0b702f4e7382a7610873e784c99ad5295f57031e23d" protocol=ttrpc version=3 Sep 12 17:44:19.822744 containerd[1948]: time="2025-09-12T17:44:19.822714752Z" level=info msg="CreateContainer within sandbox \"f8e9658154bb19cdba97553e8807f1763a2feb011b6c989fdddc436932bc2b78\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec65fc362975f8d72188459ede464275303d494b0f85d2d6c29b8fa256651ef1\"" Sep 12 17:44:19.823282 containerd[1948]: time="2025-09-12T17:44:19.823258198Z" level=info msg="StartContainer for \"ec65fc362975f8d72188459ede464275303d494b0f85d2d6c29b8fa256651ef1\"" Sep 12 17:44:19.823977 containerd[1948]: time="2025-09-12T17:44:19.823925987Z" level=info msg="connecting to shim ec65fc362975f8d72188459ede464275303d494b0f85d2d6c29b8fa256651ef1" address="unix:///run/containerd/s/6604f79b46003ec0ef73383ae7d554298a01f9cb228b70af6b05fe16ef40220c" protocol=ttrpc version=3 Sep 12 17:44:19.844742 systemd[1]: Started cri-containerd-b82e1b66874c3ad337b44b5292964946774ae8cae04be615969b0a205485574d.scope - libcontainer container b82e1b66874c3ad337b44b5292964946774ae8cae04be615969b0a205485574d. Sep 12 17:44:19.854661 systemd[1]: Started cri-containerd-ec65fc362975f8d72188459ede464275303d494b0f85d2d6c29b8fa256651ef1.scope - libcontainer container ec65fc362975f8d72188459ede464275303d494b0f85d2d6c29b8fa256651ef1. Sep 12 17:44:19.911743 containerd[1948]: time="2025-09-12T17:44:19.911700200Z" level=info msg="StartContainer for \"b82e1b66874c3ad337b44b5292964946774ae8cae04be615969b0a205485574d\" returns successfully" Sep 12 17:44:19.918012 containerd[1948]: time="2025-09-12T17:44:19.917933386Z" level=info msg="StartContainer for \"ec65fc362975f8d72188459ede464275303d494b0f85d2d6c29b8fa256651ef1\" returns successfully" Sep 12 17:44:20.139084 kubelet[3314]: I0912 17:44:20.138828 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qtkmb" podStartSLOduration=25.137951669 podStartE2EDuration="25.137951669s" podCreationTimestamp="2025-09-12 17:43:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:44:20.137680542 +0000 UTC m=+31.441604179" watchObservedRunningTime="2025-09-12 17:44:20.137951669 +0000 UTC m=+31.441875296" Sep 12 17:44:20.163229 kubelet[3314]: I0912 17:44:20.163171 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nsl6v" podStartSLOduration=25.163153231 podStartE2EDuration="25.163153231s" podCreationTimestamp="2025-09-12 17:43:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:44:20.161806779 +0000 UTC m=+31.465730405" watchObservedRunningTime="2025-09-12 17:44:20.163153231 +0000 UTC m=+31.467076848" Sep 12 17:44:20.544596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664202228.mount: Deactivated successfully. Sep 12 17:44:31.017249 systemd[1]: Started sshd@9-172.31.16.83:22-139.178.68.195:32882.service - OpenSSH per-connection server daemon (139.178.68.195:32882). 
Sep 12 17:44:31.216671 sshd[4796]: Accepted publickey for core from 139.178.68.195 port 32882 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:44:31.219033 sshd-session[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:31.227478 systemd-logind[1901]: New session 10 of user core. Sep 12 17:44:31.236636 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:44:32.057983 sshd[4799]: Connection closed by 139.178.68.195 port 32882 Sep 12 17:44:32.058826 sshd-session[4796]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:32.065401 systemd[1]: sshd@9-172.31.16.83:22-139.178.68.195:32882.service: Deactivated successfully. Sep 12 17:44:32.067868 systemd-logind[1901]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:44:32.072814 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:44:32.076867 systemd-logind[1901]: Removed session 10. Sep 12 17:44:37.094548 systemd[1]: Started sshd@10-172.31.16.83:22-139.178.68.195:32892.service - OpenSSH per-connection server daemon (139.178.68.195:32892). Sep 12 17:44:37.280246 sshd[4811]: Accepted publickey for core from 139.178.68.195 port 32892 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:44:37.281691 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:37.286478 systemd-logind[1901]: New session 11 of user core. Sep 12 17:44:37.293843 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:44:37.497714 sshd[4814]: Connection closed by 139.178.68.195 port 32892 Sep 12 17:44:37.498258 sshd-session[4811]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:37.502161 systemd[1]: sshd@10-172.31.16.83:22-139.178.68.195:32892.service: Deactivated successfully. Sep 12 17:44:37.504686 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:44:37.505996 systemd-logind[1901]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:44:37.507646 systemd-logind[1901]: Removed session 11. Sep 12 17:44:42.531534 systemd[1]: Started sshd@11-172.31.16.83:22-139.178.68.195:56940.service - OpenSSH per-connection server daemon (139.178.68.195:56940). Sep 12 17:44:42.696845 sshd[4830]: Accepted publickey for core from 139.178.68.195 port 56940 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:44:42.698330 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:42.704275 systemd-logind[1901]: New session 12 of user core. Sep 12 17:44:42.712686 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:44:42.907671 sshd[4833]: Connection closed by 139.178.68.195 port 56940 Sep 12 17:44:42.909634 sshd-session[4830]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:42.913612 systemd-logind[1901]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:44:42.914388 systemd[1]: sshd@11-172.31.16.83:22-139.178.68.195:56940.service: Deactivated successfully. Sep 12 17:44:42.916713 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:44:42.918741 systemd-logind[1901]: Removed session 12. Sep 12 17:44:47.941886 systemd[1]: Started sshd@12-172.31.16.83:22-139.178.68.195:56952.service - OpenSSH per-connection server daemon (139.178.68.195:56952). 
Sep 12 17:44:48.114304 sshd[4846]: Accepted publickey for core from 139.178.68.195 port 56952 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:44:48.115838 sshd-session[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:48.121327 systemd-logind[1901]: New session 13 of user core. Sep 12 17:44:48.130672 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:44:48.328488 sshd[4849]: Connection closed by 139.178.68.195 port 56952 Sep 12 17:44:48.330545 sshd-session[4846]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:48.334996 systemd[1]: sshd@12-172.31.16.83:22-139.178.68.195:56952.service: Deactivated successfully. Sep 12 17:44:48.338785 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:44:48.340886 systemd-logind[1901]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:44:48.343201 systemd-logind[1901]: Removed session 13. Sep 12 17:44:48.360361 systemd[1]: Started sshd@13-172.31.16.83:22-139.178.68.195:56964.service - OpenSSH per-connection server daemon (139.178.68.195:56964). Sep 12 17:44:48.525886 sshd[4862]: Accepted publickey for core from 139.178.68.195 port 56964 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:44:48.527325 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:48.532472 systemd-logind[1901]: New session 14 of user core. Sep 12 17:44:48.551669 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:44:48.804016 sshd[4865]: Connection closed by 139.178.68.195 port 56964 Sep 12 17:44:48.806635 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:48.816016 systemd[1]: sshd@13-172.31.16.83:22-139.178.68.195:56964.service: Deactivated successfully. Sep 12 17:44:48.821824 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:44:48.826486 systemd-logind[1901]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:44:48.850738 systemd[1]: Started sshd@14-172.31.16.83:22-139.178.68.195:56974.service - OpenSSH per-connection server daemon (139.178.68.195:56974). Sep 12 17:44:48.854197 systemd-logind[1901]: Removed session 14. Sep 12 17:44:49.034872 sshd[4876]: Accepted publickey for core from 139.178.68.195 port 56974 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:44:49.036475 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:49.041490 systemd-logind[1901]: New session 15 of user core. Sep 12 17:44:49.045684 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:44:49.266369 sshd[4881]: Connection closed by 139.178.68.195 port 56974 Sep 12 17:44:49.266948 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:49.271356 systemd-logind[1901]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:44:49.272498 systemd[1]: sshd@14-172.31.16.83:22-139.178.68.195:56974.service: Deactivated successfully. Sep 12 17:44:49.274245 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:44:49.276278 systemd-logind[1901]: Removed session 15. Sep 12 17:44:54.302797 systemd[1]: Started sshd@15-172.31.16.83:22-139.178.68.195:55234.service - OpenSSH per-connection server daemon (139.178.68.195:55234). 
Sep 12 17:44:54.478533 sshd[4895]: Accepted publickey for core from 139.178.68.195 port 55234 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:44:54.480293 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:54.485483 systemd-logind[1901]: New session 16 of user core. Sep 12 17:44:54.490601 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:44:54.686559 sshd[4898]: Connection closed by 139.178.68.195 port 55234 Sep 12 17:44:54.687120 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:54.693731 systemd-logind[1901]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:44:54.693905 systemd[1]: sshd@15-172.31.16.83:22-139.178.68.195:55234.service: Deactivated successfully. Sep 12 17:44:54.696906 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:44:54.699080 systemd-logind[1901]: Removed session 16. Sep 12 17:44:59.720829 systemd[1]: Started sshd@16-172.31.16.83:22-139.178.68.195:55240.service - OpenSSH per-connection server daemon (139.178.68.195:55240). Sep 12 17:44:59.892462 sshd[4911]: Accepted publickey for core from 139.178.68.195 port 55240 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:44:59.894048 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:59.899490 systemd-logind[1901]: New session 17 of user core. Sep 12 17:44:59.906650 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:45:00.258724 sshd[4914]: Connection closed by 139.178.68.195 port 55240 Sep 12 17:45:00.260495 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:00.266884 systemd[1]: sshd@16-172.31.16.83:22-139.178.68.195:55240.service: Deactivated successfully. Sep 12 17:45:00.270365 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:45:00.272248 systemd-logind[1901]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:45:00.274386 systemd-logind[1901]: Removed session 17. Sep 12 17:45:00.303686 systemd[1]: Started sshd@17-172.31.16.83:22-139.178.68.195:59040.service - OpenSSH per-connection server daemon (139.178.68.195:59040). Sep 12 17:45:00.540055 sshd[4926]: Accepted publickey for core from 139.178.68.195 port 59040 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:45:00.543634 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:00.555398 systemd-logind[1901]: New session 18 of user core. Sep 12 17:45:00.562888 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:45:01.447001 sshd[4929]: Connection closed by 139.178.68.195 port 59040 Sep 12 17:45:01.448318 sshd-session[4926]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:01.537937 systemd[1]: Started sshd@18-172.31.16.83:22-139.178.68.195:59044.service - OpenSSH per-connection server daemon (139.178.68.195:59044). Sep 12 17:45:01.558603 systemd[1]: sshd@17-172.31.16.83:22-139.178.68.195:59040.service: Deactivated successfully. Sep 12 17:45:01.566666 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:45:01.569822 systemd-logind[1901]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:45:01.587357 systemd-logind[1901]: Removed session 18. 
Sep 12 17:45:01.886202 sshd[4936]: Accepted publickey for core from 139.178.68.195 port 59044 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:45:01.899305 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:01.939136 systemd-logind[1901]: New session 19 of user core. Sep 12 17:45:01.946942 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:45:03.435451 sshd[4942]: Connection closed by 139.178.68.195 port 59044 Sep 12 17:45:03.436193 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:03.454244 systemd-logind[1901]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:45:03.467661 systemd[1]: sshd@18-172.31.16.83:22-139.178.68.195:59044.service: Deactivated successfully. Sep 12 17:45:03.481249 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:45:03.508358 systemd-logind[1901]: Removed session 19. Sep 12 17:45:03.511139 systemd[1]: Started sshd@19-172.31.16.83:22-139.178.68.195:59052.service - OpenSSH per-connection server daemon (139.178.68.195:59052). Sep 12 17:45:03.696116 sshd[4959]: Accepted publickey for core from 139.178.68.195 port 59052 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:45:03.701257 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:03.715549 systemd-logind[1901]: New session 20 of user core. Sep 12 17:45:03.724664 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:45:04.130102 sshd[4962]: Connection closed by 139.178.68.195 port 59052 Sep 12 17:45:04.132224 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:04.136029 systemd-logind[1901]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:45:04.136701 systemd[1]: sshd@19-172.31.16.83:22-139.178.68.195:59052.service: Deactivated successfully. Sep 12 17:45:04.139284 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:45:04.141405 systemd-logind[1901]: Removed session 20. Sep 12 17:45:04.164448 systemd[1]: Started sshd@20-172.31.16.83:22-139.178.68.195:59064.service - OpenSSH per-connection server daemon (139.178.68.195:59064). Sep 12 17:45:04.338573 sshd[4972]: Accepted publickey for core from 139.178.68.195 port 59064 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:45:04.340174 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:04.346274 systemd-logind[1901]: New session 21 of user core. Sep 12 17:45:04.350675 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:45:04.548513 sshd[4975]: Connection closed by 139.178.68.195 port 59064 Sep 12 17:45:04.546975 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:04.554165 systemd-logind[1901]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:45:04.554737 systemd[1]: sshd@20-172.31.16.83:22-139.178.68.195:59064.service: Deactivated successfully. Sep 12 17:45:04.557347 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:45:04.560286 systemd-logind[1901]: Removed session 21. Sep 12 17:45:09.584307 systemd[1]: Started sshd@21-172.31.16.83:22-139.178.68.195:59072.service - OpenSSH per-connection server daemon (139.178.68.195:59072). 
Sep 12 17:45:09.752219 sshd[4987]: Accepted publickey for core from 139.178.68.195 port 59072 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:45:09.753973 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:09.760492 systemd-logind[1901]: New session 22 of user core. Sep 12 17:45:09.765650 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:45:09.979694 sshd[4992]: Connection closed by 139.178.68.195 port 59072 Sep 12 17:45:09.980750 sshd-session[4987]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:09.987239 systemd-logind[1901]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:45:09.987389 systemd[1]: sshd@21-172.31.16.83:22-139.178.68.195:59072.service: Deactivated successfully. Sep 12 17:45:09.990531 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:45:09.993875 systemd-logind[1901]: Removed session 22. Sep 12 17:45:15.041321 systemd[1]: Started sshd@22-172.31.16.83:22-139.178.68.195:57452.service - OpenSSH per-connection server daemon (139.178.68.195:57452). Sep 12 17:45:15.253223 sshd[5006]: Accepted publickey for core from 139.178.68.195 port 57452 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:45:15.254691 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:15.260770 systemd-logind[1901]: New session 23 of user core. Sep 12 17:45:15.266679 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:45:15.456246 sshd[5009]: Connection closed by 139.178.68.195 port 57452 Sep 12 17:45:15.457014 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:15.462243 systemd[1]: sshd@22-172.31.16.83:22-139.178.68.195:57452.service: Deactivated successfully. Sep 12 17:45:15.465188 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:45:15.466367 systemd-logind[1901]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:45:15.468288 systemd-logind[1901]: Removed session 23. Sep 12 17:45:20.497890 systemd[1]: Started sshd@23-172.31.16.83:22-139.178.68.195:59696.service - OpenSSH per-connection server daemon (139.178.68.195:59696). Sep 12 17:45:20.669472 sshd[5021]: Accepted publickey for core from 139.178.68.195 port 59696 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:45:20.670646 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:20.677488 systemd-logind[1901]: New session 24 of user core. Sep 12 17:45:20.684760 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:45:20.894476 sshd[5024]: Connection closed by 139.178.68.195 port 59696 Sep 12 17:45:20.895212 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:20.900028 systemd[1]: sshd@23-172.31.16.83:22-139.178.68.195:59696.service: Deactivated successfully. Sep 12 17:45:20.902666 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:45:20.904220 systemd-logind[1901]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:45:20.906554 systemd-logind[1901]: Removed session 24. Sep 12 17:45:20.926902 systemd[1]: Started sshd@24-172.31.16.83:22-139.178.68.195:59704.service - OpenSSH per-connection server daemon (139.178.68.195:59704). 
Sep 12 17:45:21.128815 sshd[5036]: Accepted publickey for core from 139.178.68.195 port 59704 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:45:21.130294 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:21.137076 systemd-logind[1901]: New session 25 of user core. Sep 12 17:45:21.147842 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:45:22.623842 containerd[1948]: time="2025-09-12T17:45:22.623690931Z" level=info msg="StopContainer for \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\" with timeout 30 (s)" Sep 12 17:45:22.624885 containerd[1948]: time="2025-09-12T17:45:22.624846046Z" level=info msg="Stop container \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\" with signal terminated" Sep 12 17:45:22.684790 systemd[1]: cri-containerd-b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b.scope: Deactivated successfully. Sep 12 17:45:22.687772 containerd[1948]: time="2025-09-12T17:45:22.687611212Z" level=info msg="received exit event container_id:\"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\" id:\"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\" pid:4057 exited_at:{seconds:1757699122 nanos:687029330}" Sep 12 17:45:22.695511 containerd[1948]: time="2025-09-12T17:45:22.695437567Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\" id:\"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\" pid:4057 exited_at:{seconds:1757699122 nanos:687029330}" Sep 12 17:45:22.711021 containerd[1948]: time="2025-09-12T17:45:22.710765937Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:45:22.717263 containerd[1948]: time="2025-09-12T17:45:22.717198563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" id:\"35d73df6849f5f00db7e1c5ac912de981c5100e7286c9a09be1e0a3342a16cf7\" pid:5067 exited_at:{seconds:1757699122 nanos:709365551}" Sep 12 17:45:22.725837 containerd[1948]: time="2025-09-12T17:45:22.725614278Z" level=info msg="StopContainer for \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" with timeout 2 (s)" Sep 12 17:45:22.727907 containerd[1948]: time="2025-09-12T17:45:22.727659888Z" level=info msg="Stop container \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" with signal terminated" Sep 12 17:45:22.740754 systemd-networkd[1817]: lxc_health: Link DOWN Sep 12 17:45:22.740764 systemd-networkd[1817]: lxc_health: Lost carrier Sep 12 17:45:22.766686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b-rootfs.mount: Deactivated successfully. Sep 12 17:45:22.769382 systemd[1]: cri-containerd-7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2.scope: Deactivated successfully. Sep 12 17:45:22.769882 systemd[1]: cri-containerd-7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2.scope: Consumed 8.240s CPU time, 234.3M memory peak, 106.3M read from disk, 13.3M written to disk. 
Sep 12 17:45:22.771896 containerd[1948]: time="2025-09-12T17:45:22.771824836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" id:\"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" pid:4130 exited_at:{seconds:1757699122 nanos:771380043}" Sep 12 17:45:22.772144 containerd[1948]: time="2025-09-12T17:45:22.771917173Z" level=info msg="received exit event container_id:\"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" id:\"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" pid:4130 exited_at:{seconds:1757699122 nanos:771380043}" Sep 12 17:45:22.789134 containerd[1948]: time="2025-09-12T17:45:22.789096362Z" level=info msg="StopContainer for \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\" returns successfully" Sep 12 17:45:22.790389 containerd[1948]: time="2025-09-12T17:45:22.790349156Z" level=info msg="StopPodSandbox for \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\"" Sep 12 17:45:22.790562 containerd[1948]: time="2025-09-12T17:45:22.790457695Z" level=info msg="Container to stop \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:45:22.807615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2-rootfs.mount: Deactivated successfully. Sep 12 17:45:22.808866 systemd[1]: cri-containerd-3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48.scope: Deactivated successfully. Sep 12 17:45:22.809197 containerd[1948]: time="2025-09-12T17:45:22.808736208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\" id:\"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\" pid:3706 exit_status:137 exited_at:{seconds:1757699122 nanos:807951955}" Sep 12 17:45:22.820331 containerd[1948]: time="2025-09-12T17:45:22.820211462Z" level=info msg="StopContainer for \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" returns successfully" Sep 12 17:45:22.820758 containerd[1948]: time="2025-09-12T17:45:22.820717072Z" level=info msg="StopPodSandbox for \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\"" Sep 12 17:45:22.820865 containerd[1948]: time="2025-09-12T17:45:22.820790020Z" level=info msg="Container to stop \"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:45:22.820865 containerd[1948]: time="2025-09-12T17:45:22.820807313Z" level=info msg="Container to stop \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:45:22.820865 containerd[1948]: time="2025-09-12T17:45:22.820821051Z" level=info msg="Container to stop \"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:45:22.820865 containerd[1948]: time="2025-09-12T17:45:22.820834599Z" level=info msg="Container to stop \"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:45:22.820865 containerd[1948]: time="2025-09-12T17:45:22.820846079Z" level=info msg="Container to stop 
\"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:45:22.834643 systemd[1]: cri-containerd-326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c.scope: Deactivated successfully. Sep 12 17:45:22.878805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48-rootfs.mount: Deactivated successfully. Sep 12 17:45:22.883158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c-rootfs.mount: Deactivated successfully. Sep 12 17:45:22.895107 containerd[1948]: time="2025-09-12T17:45:22.894904092Z" level=info msg="shim disconnected" id=3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48 namespace=k8s.io Sep 12 17:45:22.895107 containerd[1948]: time="2025-09-12T17:45:22.895098424Z" level=warning msg="cleaning up after shim disconnected" id=3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48 namespace=k8s.io Sep 12 17:45:22.902649 containerd[1948]: time="2025-09-12T17:45:22.895109935Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:45:22.902837 containerd[1948]: time="2025-09-12T17:45:22.895464225Z" level=info msg="shim disconnected" id=326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c namespace=k8s.io Sep 12 17:45:22.902837 containerd[1948]: time="2025-09-12T17:45:22.902737245Z" level=warning msg="cleaning up after shim disconnected" id=326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c namespace=k8s.io Sep 12 17:45:22.902837 containerd[1948]: time="2025-09-12T17:45:22.902746675Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:45:22.947444 containerd[1948]: time="2025-09-12T17:45:22.947267683Z" level=info msg="received exit event sandbox_id:\"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" exit_status:137 exited_at:{seconds:1757699122 nanos:841047439}" Sep 12 17:45:22.953024 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c-shm.mount: Deactivated successfully. 
Sep 12 17:45:22.957530 containerd[1948]: time="2025-09-12T17:45:22.957068229Z" level=info msg="received exit event sandbox_id:\"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\" exit_status:137 exited_at:{seconds:1757699122 nanos:807951955}" Sep 12 17:45:22.957723 containerd[1948]: time="2025-09-12T17:45:22.957674428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" id:\"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" pid:3645 exit_status:137 exited_at:{seconds:1757699122 nanos:841047439}" Sep 12 17:45:22.958711 containerd[1948]: time="2025-09-12T17:45:22.958580366Z" level=info msg="TearDown network for sandbox \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\" successfully" Sep 12 17:45:22.958711 containerd[1948]: time="2025-09-12T17:45:22.958606940Z" level=info msg="StopPodSandbox for \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\" returns successfully" Sep 12 17:45:22.962080 containerd[1948]: time="2025-09-12T17:45:22.961931392Z" level=info msg="TearDown network for sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" successfully" Sep 12 17:45:22.962080 containerd[1948]: time="2025-09-12T17:45:22.961968512Z" level=info msg="StopPodSandbox for \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" returns successfully" Sep 12 17:45:23.105005 kubelet[3314]: I0912 17:45:23.104467 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-lib-modules\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.105005 kubelet[3314]: I0912 17:45:23.104530 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66qks\" (UniqueName: \"kubernetes.io/projected/9bad6648-0b92-409c-b7be-09b5f6adab99-kube-api-access-66qks\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.105005 kubelet[3314]: I0912 17:45:23.104560 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-etc-cni-netd\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.105005 kubelet[3314]: I0912 17:45:23.104585 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-bpf-maps\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.105005 kubelet[3314]: I0912 17:45:23.104606 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-hostproc\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.105005 kubelet[3314]: I0912 17:45:23.104632 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bad6648-0b92-409c-b7be-09b5f6adab99-hubble-tls\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.105745 
kubelet[3314]: I0912 17:45:23.104664 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-host-proc-sys-kernel\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.105745 kubelet[3314]: I0912 17:45:23.104684 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-host-proc-sys-net\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.105745 kubelet[3314]: I0912 17:45:23.104709 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bad6648-0b92-409c-b7be-09b5f6adab99-clustermesh-secrets\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.105745 kubelet[3314]: I0912 17:45:23.104732 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-cilium-cgroup\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.105745 kubelet[3314]: I0912 17:45:23.104757 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/856d4792-966f-4ba1-a20f-e8d9d4a9bdb0-cilium-config-path\") pod \"856d4792-966f-4ba1-a20f-e8d9d4a9bdb0\" (UID: \"856d4792-966f-4ba1-a20f-e8d9d4a9bdb0\") " Sep 12 17:45:23.105745 kubelet[3314]: I0912 17:45:23.104782 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crhzg\" (UniqueName: \"kubernetes.io/projected/856d4792-966f-4ba1-a20f-e8d9d4a9bdb0-kube-api-access-crhzg\") pod \"856d4792-966f-4ba1-a20f-e8d9d4a9bdb0\" (UID: \"856d4792-966f-4ba1-a20f-e8d9d4a9bdb0\") " Sep 12 17:45:23.106018 kubelet[3314]: I0912 17:45:23.104804 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-cni-path\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.106018 kubelet[3314]: I0912 17:45:23.104826 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-cilium-run\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.106018 kubelet[3314]: I0912 17:45:23.104847 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-xtables-lock\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.106018 kubelet[3314]: I0912 17:45:23.104873 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bad6648-0b92-409c-b7be-09b5f6adab99-cilium-config-path\") pod \"9bad6648-0b92-409c-b7be-09b5f6adab99\" (UID: \"9bad6648-0b92-409c-b7be-09b5f6adab99\") " Sep 12 17:45:23.106178 kubelet[3314]: 
I0912 17:45:23.104555 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:45:23.106449 kubelet[3314]: I0912 17:45:23.106110 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:45:23.106449 kubelet[3314]: I0912 17:45:23.106286 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:45:23.106796 kubelet[3314]: I0912 17:45:23.106772 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-hostproc" (OuterVolumeSpecName: "hostproc") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:45:23.109910 kubelet[3314]: I0912 17:45:23.109845 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:45:23.113329 kubelet[3314]: I0912 17:45:23.110401 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:45:23.114431 kubelet[3314]: I0912 17:45:23.113502 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bad6648-0b92-409c-b7be-09b5f6adab99-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:45:23.114431 kubelet[3314]: I0912 17:45:23.114272 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bad6648-0b92-409c-b7be-09b5f6adab99-kube-api-access-66qks" (OuterVolumeSpecName: "kube-api-access-66qks") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "kube-api-access-66qks". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:45:23.115056 kubelet[3314]: I0912 17:45:23.115029 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:45:23.115056 kubelet[3314]: I0912 17:45:23.115036 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-cni-path" (OuterVolumeSpecName: "cni-path") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:45:23.115321 kubelet[3314]: I0912 17:45:23.115300 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:45:23.115623 kubelet[3314]: I0912 17:45:23.115540 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:45:23.122732 kubelet[3314]: I0912 17:45:23.122687 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bad6648-0b92-409c-b7be-09b5f6adab99-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:45:23.124983 kubelet[3314]: I0912 17:45:23.124940 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856d4792-966f-4ba1-a20f-e8d9d4a9bdb0-kube-api-access-crhzg" (OuterVolumeSpecName: "kube-api-access-crhzg") pod "856d4792-966f-4ba1-a20f-e8d9d4a9bdb0" (UID: "856d4792-966f-4ba1-a20f-e8d9d4a9bdb0"). InnerVolumeSpecName "kube-api-access-crhzg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:45:23.126021 kubelet[3314]: I0912 17:45:23.125877 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bad6648-0b92-409c-b7be-09b5f6adab99-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9bad6648-0b92-409c-b7be-09b5f6adab99" (UID: "9bad6648-0b92-409c-b7be-09b5f6adab99"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:45:23.126162 kubelet[3314]: I0912 17:45:23.126044 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/856d4792-966f-4ba1-a20f-e8d9d4a9bdb0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "856d4792-966f-4ba1-a20f-e8d9d4a9bdb0" (UID: "856d4792-966f-4ba1-a20f-e8d9d4a9bdb0"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:45:23.205435 kubelet[3314]: I0912 17:45:23.205183 3314 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-cni-path\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205435 kubelet[3314]: I0912 17:45:23.205226 3314 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-cilium-run\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205435 kubelet[3314]: I0912 17:45:23.205238 3314 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-xtables-lock\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205435 kubelet[3314]: I0912 17:45:23.205248 3314 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bad6648-0b92-409c-b7be-09b5f6adab99-cilium-config-path\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205435 kubelet[3314]: I0912 17:45:23.205262 3314 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-lib-modules\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205435 kubelet[3314]: I0912 17:45:23.205270 3314 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-66qks\" (UniqueName: \"kubernetes.io/projected/9bad6648-0b92-409c-b7be-09b5f6adab99-kube-api-access-66qks\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205435 kubelet[3314]: I0912 17:45:23.205278 3314 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-etc-cni-netd\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205435 kubelet[3314]: I0912 17:45:23.205287 3314 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-bpf-maps\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205796 kubelet[3314]: I0912 17:45:23.205294 3314 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-hostproc\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205796 kubelet[3314]: I0912 17:45:23.205301 3314 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bad6648-0b92-409c-b7be-09b5f6adab99-hubble-tls\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205796 kubelet[3314]: I0912 17:45:23.205310 3314 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-host-proc-sys-kernel\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205796 kubelet[3314]: I0912 17:45:23.205318 3314 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-host-proc-sys-net\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205796 kubelet[3314]: I0912 17:45:23.205326 3314 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/9bad6648-0b92-409c-b7be-09b5f6adab99-clustermesh-secrets\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205796 kubelet[3314]: I0912 17:45:23.205333 3314 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bad6648-0b92-409c-b7be-09b5f6adab99-cilium-cgroup\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205796 kubelet[3314]: I0912 17:45:23.205340 3314 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/856d4792-966f-4ba1-a20f-e8d9d4a9bdb0-cilium-config-path\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.205796 kubelet[3314]: I0912 17:45:23.205347 3314 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-crhzg\" (UniqueName: \"kubernetes.io/projected/856d4792-966f-4ba1-a20f-e8d9d4a9bdb0-kube-api-access-crhzg\") on node \"ip-172-31-16-83\" DevicePath \"\"" Sep 12 17:45:23.310138 kubelet[3314]: I0912 17:45:23.310082 3314 scope.go:117] "RemoveContainer" containerID="b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b" Sep 12 17:45:23.312052 containerd[1948]: time="2025-09-12T17:45:23.311912074Z" level=info msg="RemoveContainer for \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\"" Sep 12 17:45:23.321248 systemd[1]: Removed slice kubepods-besteffort-pod856d4792_966f_4ba1_a20f_e8d9d4a9bdb0.slice - libcontainer container kubepods-besteffort-pod856d4792_966f_4ba1_a20f_e8d9d4a9bdb0.slice. Sep 12 17:45:23.323288 containerd[1948]: time="2025-09-12T17:45:23.323228879Z" level=info msg="RemoveContainer for \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\" returns successfully" Sep 12 17:45:23.323856 kubelet[3314]: I0912 17:45:23.323693 3314 scope.go:117] "RemoveContainer" containerID="b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b" Sep 12 17:45:23.324913 containerd[1948]: time="2025-09-12T17:45:23.324273694Z" level=error msg="ContainerStatus for \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\": not found" Sep 12 17:45:23.327464 kubelet[3314]: E0912 17:45:23.326525 3314 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\": not found" containerID="b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b" Sep 12 17:45:23.327464 kubelet[3314]: I0912 17:45:23.326568 3314 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b"} err="failed to get container status \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1616a223e1acead1806237fd899cb889c7fe4884b3a9373f1a05dc05db64e8b\": not found" Sep 12 17:45:23.328164 kubelet[3314]: I0912 17:45:23.328067 3314 scope.go:117] "RemoveContainer" containerID="7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2" Sep 12 17:45:23.337250 containerd[1948]: time="2025-09-12T17:45:23.337201918Z" level=info msg="RemoveContainer for \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\"" Sep 12 17:45:23.340278 systemd[1]: 
Removed slice kubepods-burstable-pod9bad6648_0b92_409c_b7be_09b5f6adab99.slice - libcontainer container kubepods-burstable-pod9bad6648_0b92_409c_b7be_09b5f6adab99.slice. Sep 12 17:45:23.340630 systemd[1]: kubepods-burstable-pod9bad6648_0b92_409c_b7be_09b5f6adab99.slice: Consumed 8.355s CPU time, 234.6M memory peak, 107.1M read from disk, 13.3M written to disk. Sep 12 17:45:23.352437 containerd[1948]: time="2025-09-12T17:45:23.352311802Z" level=info msg="RemoveContainer for \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" returns successfully" Sep 12 17:45:23.353041 kubelet[3314]: I0912 17:45:23.353016 3314 scope.go:117] "RemoveContainer" containerID="d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7" Sep 12 17:45:23.360269 containerd[1948]: time="2025-09-12T17:45:23.360231392Z" level=info msg="RemoveContainer for \"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\"" Sep 12 17:45:23.377535 containerd[1948]: time="2025-09-12T17:45:23.377489221Z" level=info msg="RemoveContainer for \"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\" returns successfully" Sep 12 17:45:23.377840 kubelet[3314]: I0912 17:45:23.377759 3314 scope.go:117] "RemoveContainer" containerID="bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3" Sep 12 17:45:23.380463 containerd[1948]: time="2025-09-12T17:45:23.380054063Z" level=info msg="RemoveContainer for \"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\"" Sep 12 17:45:23.386832 containerd[1948]: time="2025-09-12T17:45:23.386783675Z" level=info msg="RemoveContainer for \"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\" returns successfully" Sep 12 17:45:23.387349 kubelet[3314]: I0912 17:45:23.387189 3314 scope.go:117] "RemoveContainer" containerID="fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291" Sep 12 17:45:23.389292 containerd[1948]: time="2025-09-12T17:45:23.389253220Z" level=info msg="RemoveContainer for \"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\"" Sep 12 17:45:23.395277 containerd[1948]: time="2025-09-12T17:45:23.395240694Z" level=info msg="RemoveContainer for \"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\" returns successfully" Sep 12 17:45:23.395760 kubelet[3314]: I0912 17:45:23.395658 3314 scope.go:117] "RemoveContainer" containerID="88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174" Sep 12 17:45:23.397265 containerd[1948]: time="2025-09-12T17:45:23.397241714Z" level=info msg="RemoveContainer for \"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\"" Sep 12 17:45:23.402974 containerd[1948]: time="2025-09-12T17:45:23.402926698Z" level=info msg="RemoveContainer for \"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\" returns successfully" Sep 12 17:45:23.403372 kubelet[3314]: I0912 17:45:23.403330 3314 scope.go:117] "RemoveContainer" containerID="7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2" Sep 12 17:45:23.403990 containerd[1948]: time="2025-09-12T17:45:23.403946131Z" level=error msg="ContainerStatus for \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\": not found" Sep 12 17:45:23.404136 kubelet[3314]: E0912 17:45:23.404103 3314 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred 
when try to find container \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\": not found" containerID="7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2" Sep 12 17:45:23.404208 kubelet[3314]: I0912 17:45:23.404135 3314 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2"} err="failed to get container status \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"7844fb16313015b444295c937ffa3e8e28028b3dac6a8d2ae1c558d6d29712c2\": not found" Sep 12 17:45:23.404254 kubelet[3314]: I0912 17:45:23.404216 3314 scope.go:117] "RemoveContainer" containerID="d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7" Sep 12 17:45:23.404476 containerd[1948]: time="2025-09-12T17:45:23.404407198Z" level=error msg="ContainerStatus for \"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\": not found" Sep 12 17:45:23.404646 kubelet[3314]: E0912 17:45:23.404591 3314 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\": not found" containerID="d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7" Sep 12 17:45:23.404722 kubelet[3314]: I0912 17:45:23.404632 3314 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7"} err="failed to get container status \"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8a298da47ad9276d2403c15e408244547cdafcf522f0174bc4b151b627b3bd7\": not found" Sep 12 17:45:23.404722 kubelet[3314]: I0912 17:45:23.404678 3314 scope.go:117] "RemoveContainer" containerID="bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3" Sep 12 17:45:23.404950 containerd[1948]: time="2025-09-12T17:45:23.404878320Z" level=error msg="ContainerStatus for \"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\": not found" Sep 12 17:45:23.405137 kubelet[3314]: E0912 17:45:23.405109 3314 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\": not found" containerID="bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3" Sep 12 17:45:23.405198 kubelet[3314]: I0912 17:45:23.405147 3314 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3"} err="failed to get container status \"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc738cc361a98a58e31247b1a1d159421e6ca5e18edc7a9c69aab5281ba6c7e3\": not found" Sep 12 17:45:23.405198 kubelet[3314]: I0912 17:45:23.405167 3314 
scope.go:117] "RemoveContainer" containerID="fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291" Sep 12 17:45:23.405523 containerd[1948]: time="2025-09-12T17:45:23.405490980Z" level=error msg="ContainerStatus for \"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\": not found" Sep 12 17:45:23.405679 kubelet[3314]: E0912 17:45:23.405649 3314 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\": not found" containerID="fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291" Sep 12 17:45:23.405737 kubelet[3314]: I0912 17:45:23.405687 3314 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291"} err="failed to get container status \"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe2fd6f4dbb881f976f4caeeef207b8433c3cf8b1443fdd5f4e673590c0b1291\": not found" Sep 12 17:45:23.405737 kubelet[3314]: I0912 17:45:23.405705 3314 scope.go:117] "RemoveContainer" containerID="88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174" Sep 12 17:45:23.405929 containerd[1948]: time="2025-09-12T17:45:23.405878673Z" level=error msg="ContainerStatus for \"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\": not found" Sep 12 17:45:23.406018 kubelet[3314]: E0912 17:45:23.405991 3314 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\": not found" containerID="88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174" Sep 12 17:45:23.406151 kubelet[3314]: I0912 17:45:23.406121 3314 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174"} err="failed to get container status \"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\": rpc error: code = NotFound desc = an error occurred when try to find container \"88bb284cfe4c812ac617ca84f41dbf9c58256803cf9e330b1c6713d02f8f7174\": not found" Sep 12 17:45:23.768951 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48-shm.mount: Deactivated successfully. Sep 12 17:45:23.770698 systemd[1]: var-lib-kubelet-pods-856d4792\x2d966f\x2d4ba1\x2da20f\x2de8d9d4a9bdb0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcrhzg.mount: Deactivated successfully. Sep 12 17:45:23.770799 systemd[1]: var-lib-kubelet-pods-9bad6648\x2d0b92\x2d409c\x2db7be\x2d09b5f6adab99-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d66qks.mount: Deactivated successfully. Sep 12 17:45:23.770882 systemd[1]: var-lib-kubelet-pods-9bad6648\x2d0b92\x2d409c\x2db7be\x2d09b5f6adab99-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 12 17:45:23.770966 systemd[1]: var-lib-kubelet-pods-9bad6648\x2d0b92\x2d409c\x2db7be\x2d09b5f6adab99-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:45:23.989107 kubelet[3314]: E0912 17:45:23.988951 3314 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:45:24.538612 sshd[5039]: Connection closed by 139.178.68.195 port 59704 Sep 12 17:45:24.541209 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:24.546549 systemd[1]: sshd@24-172.31.16.83:22-139.178.68.195:59704.service: Deactivated successfully. Sep 12 17:45:24.549254 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:45:24.550344 systemd-logind[1901]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:45:24.552811 systemd-logind[1901]: Removed session 25. Sep 12 17:45:24.578530 systemd[1]: Started sshd@25-172.31.16.83:22-139.178.68.195:59720.service - OpenSSH per-connection server daemon (139.178.68.195:59720). Sep 12 17:45:24.747854 sshd[5194]: Accepted publickey for core from 139.178.68.195 port 59720 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:45:24.749470 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:24.756159 systemd-logind[1901]: New session 26 of user core. Sep 12 17:45:24.763798 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:45:24.861003 kubelet[3314]: I0912 17:45:24.860947 3314 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856d4792-966f-4ba1-a20f-e8d9d4a9bdb0" path="/var/lib/kubelet/pods/856d4792-966f-4ba1-a20f-e8d9d4a9bdb0/volumes" Sep 12 17:45:24.861777 kubelet[3314]: I0912 17:45:24.861520 3314 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bad6648-0b92-409c-b7be-09b5f6adab99" path="/var/lib/kubelet/pods/9bad6648-0b92-409c-b7be-09b5f6adab99/volumes" Sep 12 17:45:24.995080 ntpd[1870]: Deleting interface #11 lxc_health, fe80::45a:27ff:fe17:b278%8#123, interface stats: received=0, sent=0, dropped=0, active_time=66 secs Sep 12 17:45:24.995658 ntpd[1870]: 12 Sep 17:45:24 ntpd[1870]: Deleting interface #11 lxc_health, fe80::45a:27ff:fe17:b278%8#123, interface stats: received=0, sent=0, dropped=0, active_time=66 secs Sep 12 17:45:25.330659 sshd[5197]: Connection closed by 139.178.68.195 port 59720 Sep 12 17:45:25.331790 sshd-session[5194]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:25.341143 systemd-logind[1901]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:45:25.341923 systemd[1]: sshd@25-172.31.16.83:22-139.178.68.195:59720.service: Deactivated successfully. Sep 12 17:45:25.347250 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:45:25.367006 systemd-logind[1901]: Removed session 26. Sep 12 17:45:25.367760 systemd[1]: Started sshd@26-172.31.16.83:22-139.178.68.195:59726.service - OpenSSH per-connection server daemon (139.178.68.195:59726). 
Sep 12 17:45:25.394787 kubelet[3314]: I0912 17:45:25.392349 3314 memory_manager.go:355] "RemoveStaleState removing state" podUID="9bad6648-0b92-409c-b7be-09b5f6adab99" containerName="cilium-agent" Sep 12 17:45:25.394787 kubelet[3314]: I0912 17:45:25.394499 3314 memory_manager.go:355] "RemoveStaleState removing state" podUID="856d4792-966f-4ba1-a20f-e8d9d4a9bdb0" containerName="cilium-operator" Sep 12 17:45:25.420992 systemd[1]: Created slice kubepods-burstable-podd78de637_7c66_4b76_98be_73a1b4edba66.slice - libcontainer container kubepods-burstable-podd78de637_7c66_4b76_98be_73a1b4edba66.slice. Sep 12 17:45:25.521588 kubelet[3314]: I0912 17:45:25.521547 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d78de637-7c66-4b76-98be-73a1b4edba66-host-proc-sys-net\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521588 kubelet[3314]: I0912 17:45:25.521589 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d78de637-7c66-4b76-98be-73a1b4edba66-host-proc-sys-kernel\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521767 kubelet[3314]: I0912 17:45:25.521612 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d78de637-7c66-4b76-98be-73a1b4edba66-bpf-maps\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521767 kubelet[3314]: I0912 17:45:25.521630 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d78de637-7c66-4b76-98be-73a1b4edba66-cilium-config-path\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521767 kubelet[3314]: I0912 17:45:25.521647 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d78de637-7c66-4b76-98be-73a1b4edba66-cilium-cgroup\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521767 kubelet[3314]: I0912 17:45:25.521663 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d78de637-7c66-4b76-98be-73a1b4edba66-lib-modules\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521767 kubelet[3314]: I0912 17:45:25.521677 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d78de637-7c66-4b76-98be-73a1b4edba66-xtables-lock\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521767 kubelet[3314]: I0912 17:45:25.521691 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d78de637-7c66-4b76-98be-73a1b4edba66-hostproc\") pod \"cilium-hcn5n\" (UID: 
\"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521945 kubelet[3314]: I0912 17:45:25.521709 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d78de637-7c66-4b76-98be-73a1b4edba66-clustermesh-secrets\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521945 kubelet[3314]: I0912 17:45:25.521723 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d78de637-7c66-4b76-98be-73a1b4edba66-cilium-ipsec-secrets\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521945 kubelet[3314]: I0912 17:45:25.521740 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d78de637-7c66-4b76-98be-73a1b4edba66-cilium-run\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521945 kubelet[3314]: I0912 17:45:25.521757 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d78de637-7c66-4b76-98be-73a1b4edba66-cni-path\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521945 kubelet[3314]: I0912 17:45:25.521773 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d78de637-7c66-4b76-98be-73a1b4edba66-hubble-tls\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.521945 kubelet[3314]: I0912 17:45:25.521790 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twrls\" (UniqueName: \"kubernetes.io/projected/d78de637-7c66-4b76-98be-73a1b4edba66-kube-api-access-twrls\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.522092 kubelet[3314]: I0912 17:45:25.521806 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d78de637-7c66-4b76-98be-73a1b4edba66-etc-cni-netd\") pod \"cilium-hcn5n\" (UID: \"d78de637-7c66-4b76-98be-73a1b4edba66\") " pod="kube-system/cilium-hcn5n" Sep 12 17:45:25.565480 sshd[5209]: Accepted publickey for core from 139.178.68.195 port 59726 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:45:25.567570 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:25.573496 systemd-logind[1901]: New session 27 of user core. Sep 12 17:45:25.582685 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 17:45:25.700542 sshd[5212]: Connection closed by 139.178.68.195 port 59726 Sep 12 17:45:25.701113 sshd-session[5209]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:25.705542 systemd[1]: sshd@26-172.31.16.83:22-139.178.68.195:59726.service: Deactivated successfully. Sep 12 17:45:25.708939 systemd[1]: session-27.scope: Deactivated successfully. 
Sep 12 17:45:25.710223 systemd-logind[1901]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:45:25.713135 systemd-logind[1901]: Removed session 27. Sep 12 17:45:25.729043 containerd[1948]: time="2025-09-12T17:45:25.729000608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hcn5n,Uid:d78de637-7c66-4b76-98be-73a1b4edba66,Namespace:kube-system,Attempt:0,}" Sep 12 17:45:25.737791 systemd[1]: Started sshd@27-172.31.16.83:22-139.178.68.195:59738.service - OpenSSH per-connection server daemon (139.178.68.195:59738). Sep 12 17:45:25.764606 containerd[1948]: time="2025-09-12T17:45:25.764546510Z" level=info msg="connecting to shim 15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8" address="unix:///run/containerd/s/94b5d4416a74970d9758ef783905117c2f6910929202c6f5f312a69183b4ae5e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:45:25.808642 systemd[1]: Started cri-containerd-15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8.scope - libcontainer container 15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8. Sep 12 17:45:25.842463 containerd[1948]: time="2025-09-12T17:45:25.842294306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hcn5n,Uid:d78de637-7c66-4b76-98be-73a1b4edba66,Namespace:kube-system,Attempt:0,} returns sandbox id \"15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8\"" Sep 12 17:45:25.847130 containerd[1948]: time="2025-09-12T17:45:25.846027689Z" level=info msg="CreateContainer within sandbox \"15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:45:25.858898 containerd[1948]: time="2025-09-12T17:45:25.858861044Z" level=info msg="Container 68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:45:25.873766 containerd[1948]: time="2025-09-12T17:45:25.873732860Z" level=info msg="CreateContainer within sandbox \"15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf\"" Sep 12 17:45:25.875558 containerd[1948]: time="2025-09-12T17:45:25.874598677Z" level=info msg="StartContainer for \"68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf\"" Sep 12 17:45:25.875558 containerd[1948]: time="2025-09-12T17:45:25.875311131Z" level=info msg="connecting to shim 68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf" address="unix:///run/containerd/s/94b5d4416a74970d9758ef783905117c2f6910929202c6f5f312a69183b4ae5e" protocol=ttrpc version=3 Sep 12 17:45:25.897629 systemd[1]: Started cri-containerd-68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf.scope - libcontainer container 68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf. Sep 12 17:45:25.929484 sshd[5223]: Accepted publickey for core from 139.178.68.195 port 59738 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:45:25.933687 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:25.944668 containerd[1948]: time="2025-09-12T17:45:25.944551450Z" level=info msg="StartContainer for \"68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf\" returns successfully" Sep 12 17:45:25.946795 systemd-logind[1901]: New session 28 of user core. 
Sep 12 17:45:25.951721 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 17:45:25.964518 systemd[1]: cri-containerd-68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf.scope: Deactivated successfully. Sep 12 17:45:25.964874 systemd[1]: cri-containerd-68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf.scope: Consumed 25ms CPU time, 9.8M memory peak, 3.2M read from disk. Sep 12 17:45:25.971010 containerd[1948]: time="2025-09-12T17:45:25.970944332Z" level=info msg="received exit event container_id:\"68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf\" id:\"68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf\" pid:5283 exited_at:{seconds:1757699125 nanos:970463039}" Sep 12 17:45:25.971618 containerd[1948]: time="2025-09-12T17:45:25.971335495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf\" id:\"68d124f86cee9ce058e0e31fe9c8461bad3d6f036b329646490875b6a7f443bf\" pid:5283 exited_at:{seconds:1757699125 nanos:970463039}" Sep 12 17:45:26.343919 containerd[1948]: time="2025-09-12T17:45:26.343844282Z" level=info msg="CreateContainer within sandbox \"15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:45:26.357738 containerd[1948]: time="2025-09-12T17:45:26.357686559Z" level=info msg="Container 754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:45:26.369556 containerd[1948]: time="2025-09-12T17:45:26.369507432Z" level=info msg="CreateContainer within sandbox \"15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1\"" Sep 12 17:45:26.371625 containerd[1948]: time="2025-09-12T17:45:26.370305933Z" level=info msg="StartContainer for \"754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1\"" Sep 12 17:45:26.371625 containerd[1948]: time="2025-09-12T17:45:26.371386216Z" level=info msg="connecting to shim 754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1" address="unix:///run/containerd/s/94b5d4416a74970d9758ef783905117c2f6910929202c6f5f312a69183b4ae5e" protocol=ttrpc version=3 Sep 12 17:45:26.394663 systemd[1]: Started cri-containerd-754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1.scope - libcontainer container 754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1. Sep 12 17:45:26.430887 containerd[1948]: time="2025-09-12T17:45:26.430590535Z" level=info msg="StartContainer for \"754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1\" returns successfully" Sep 12 17:45:26.445108 systemd[1]: cri-containerd-754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1.scope: Deactivated successfully. Sep 12 17:45:26.449212 systemd[1]: cri-containerd-754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1.scope: Consumed 21ms CPU time, 7.4M memory peak, 2M read from disk. 
Sep 12 17:45:26.451356 containerd[1948]: time="2025-09-12T17:45:26.451242096Z" level=info msg="received exit event container_id:\"754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1\" id:\"754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1\" pid:5338 exited_at:{seconds:1757699126 nanos:445278260}" Sep 12 17:45:26.451874 containerd[1948]: time="2025-09-12T17:45:26.451845004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1\" id:\"754f6c44e2bc2f676b132f15c9d66feda102a5184e8d1d13451366d51873e4f1\" pid:5338 exited_at:{seconds:1757699126 nanos:445278260}" Sep 12 17:45:27.347092 containerd[1948]: time="2025-09-12T17:45:27.347046316Z" level=info msg="CreateContainer within sandbox \"15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:45:27.368040 containerd[1948]: time="2025-09-12T17:45:27.367213331Z" level=info msg="Container a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:45:27.374058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1695272113.mount: Deactivated successfully. Sep 12 17:45:27.385032 containerd[1948]: time="2025-09-12T17:45:27.384983138Z" level=info msg="CreateContainer within sandbox \"15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3\"" Sep 12 17:45:27.385674 containerd[1948]: time="2025-09-12T17:45:27.385616167Z" level=info msg="StartContainer for \"a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3\"" Sep 12 17:45:27.391247 containerd[1948]: time="2025-09-12T17:45:27.391201214Z" level=info msg="connecting to shim a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3" address="unix:///run/containerd/s/94b5d4416a74970d9758ef783905117c2f6910929202c6f5f312a69183b4ae5e" protocol=ttrpc version=3 Sep 12 17:45:27.427747 systemd[1]: Started cri-containerd-a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3.scope - libcontainer container a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3. Sep 12 17:45:27.481300 containerd[1948]: time="2025-09-12T17:45:27.481246212Z" level=info msg="StartContainer for \"a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3\" returns successfully" Sep 12 17:45:27.486402 systemd[1]: cri-containerd-a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3.scope: Deactivated successfully. Sep 12 17:45:27.488591 containerd[1948]: time="2025-09-12T17:45:27.488550257Z" level=info msg="received exit event container_id:\"a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3\" id:\"a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3\" pid:5382 exited_at:{seconds:1757699127 nanos:488099661}" Sep 12 17:45:27.488907 containerd[1948]: time="2025-09-12T17:45:27.488729580Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3\" id:\"a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3\" pid:5382 exited_at:{seconds:1757699127 nanos:488099661}" Sep 12 17:45:27.515757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a495db3b6548ba216272268d1a8f91d114b7f15bc66f873250f516181ceabbd3-rootfs.mount: Deactivated successfully. 
Sep 12 17:45:28.353194 containerd[1948]: time="2025-09-12T17:45:28.353144574Z" level=info msg="CreateContainer within sandbox \"15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:45:28.373312 containerd[1948]: time="2025-09-12T17:45:28.373159874Z" level=info msg="Container e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:45:28.384857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1134615177.mount: Deactivated successfully. Sep 12 17:45:28.388178 containerd[1948]: time="2025-09-12T17:45:28.388118259Z" level=info msg="CreateContainer within sandbox \"15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453\"" Sep 12 17:45:28.388822 containerd[1948]: time="2025-09-12T17:45:28.388735894Z" level=info msg="StartContainer for \"e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453\"" Sep 12 17:45:28.390450 containerd[1948]: time="2025-09-12T17:45:28.390385122Z" level=info msg="connecting to shim e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453" address="unix:///run/containerd/s/94b5d4416a74970d9758ef783905117c2f6910929202c6f5f312a69183b4ae5e" protocol=ttrpc version=3 Sep 12 17:45:28.418878 systemd[1]: Started cri-containerd-e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453.scope - libcontainer container e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453. Sep 12 17:45:28.454813 systemd[1]: cri-containerd-e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453.scope: Deactivated successfully. Sep 12 17:45:28.458001 containerd[1948]: time="2025-09-12T17:45:28.457964971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453\" id:\"e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453\" pid:5423 exited_at:{seconds:1757699128 nanos:457634702}" Sep 12 17:45:28.459552 containerd[1948]: time="2025-09-12T17:45:28.459016780Z" level=info msg="received exit event container_id:\"e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453\" id:\"e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453\" pid:5423 exited_at:{seconds:1757699128 nanos:457634702}" Sep 12 17:45:28.468191 containerd[1948]: time="2025-09-12T17:45:28.468039493Z" level=info msg="StartContainer for \"e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453\" returns successfully" Sep 12 17:45:28.484183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5c8ba2ba7e36e370ea170b52edcc0b98cee9b8cfca7ebfdd41f198cb954c453-rootfs.mount: Deactivated successfully. 
Sep 12 17:45:28.990702 kubelet[3314]: E0912 17:45:28.990660 3314 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:45:29.358235 containerd[1948]: time="2025-09-12T17:45:29.358187563Z" level=info msg="CreateContainer within sandbox \"15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:45:29.381488 containerd[1948]: time="2025-09-12T17:45:29.381441740Z" level=info msg="Container 2267eb877c4d3280668c81e6af46ef0275ec38b76d20199fa8ad9cd97de7b075: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:45:29.404323 containerd[1948]: time="2025-09-12T17:45:29.404270324Z" level=info msg="CreateContainer within sandbox \"15198ecb1c43bdbab114d17e31c9c84c16972d6869f20c77fc3d781b7dc79ee8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2267eb877c4d3280668c81e6af46ef0275ec38b76d20199fa8ad9cd97de7b075\"" Sep 12 17:45:29.405128 containerd[1948]: time="2025-09-12T17:45:29.405099459Z" level=info msg="StartContainer for \"2267eb877c4d3280668c81e6af46ef0275ec38b76d20199fa8ad9cd97de7b075\"" Sep 12 17:45:29.406332 containerd[1948]: time="2025-09-12T17:45:29.406267022Z" level=info msg="connecting to shim 2267eb877c4d3280668c81e6af46ef0275ec38b76d20199fa8ad9cd97de7b075" address="unix:///run/containerd/s/94b5d4416a74970d9758ef783905117c2f6910929202c6f5f312a69183b4ae5e" protocol=ttrpc version=3 Sep 12 17:45:29.438652 systemd[1]: Started cri-containerd-2267eb877c4d3280668c81e6af46ef0275ec38b76d20199fa8ad9cd97de7b075.scope - libcontainer container 2267eb877c4d3280668c81e6af46ef0275ec38b76d20199fa8ad9cd97de7b075. Sep 12 17:45:29.487329 containerd[1948]: time="2025-09-12T17:45:29.487280697Z" level=info msg="StartContainer for \"2267eb877c4d3280668c81e6af46ef0275ec38b76d20199fa8ad9cd97de7b075\" returns successfully" Sep 12 17:45:29.618407 containerd[1948]: time="2025-09-12T17:45:29.618297263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2267eb877c4d3280668c81e6af46ef0275ec38b76d20199fa8ad9cd97de7b075\" id:\"7b82128bb78321abab9e742abb7c1c705cab943047feba32898b0fbdfd4f2188\" pid:5491 exited_at:{seconds:1757699129 nanos:617591225}" Sep 12 17:45:30.312531 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 12 17:45:31.126245 kubelet[3314]: I0912 17:45:31.126192 3314 setters.go:602] "Node became not ready" node="ip-172-31-16-83" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:45:31Z","lastTransitionTime":"2025-09-12T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 17:45:32.529100 containerd[1948]: time="2025-09-12T17:45:32.529004697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2267eb877c4d3280668c81e6af46ef0275ec38b76d20199fa8ad9cd97de7b075\" id:\"134eb33924829102a093a4100f72e47cdb69e170a5260cd5317e641fc7f68182\" pid:5652 exit_status:1 exited_at:{seconds:1757699132 nanos:527555398}" Sep 12 17:45:33.604624 systemd-networkd[1817]: lxc_health: Link UP Sep 12 17:45:33.608729 (udev-worker)[5978]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:45:33.611683 systemd-networkd[1817]: lxc_health: Gained carrier Sep 12 17:45:33.767125 kubelet[3314]: I0912 17:45:33.766382 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hcn5n" podStartSLOduration=8.7663616 podStartE2EDuration="8.7663616s" podCreationTimestamp="2025-09-12 17:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:45:30.425944205 +0000 UTC m=+101.729867843" watchObservedRunningTime="2025-09-12 17:45:33.7663616 +0000 UTC m=+105.070285226" Sep 12 17:45:34.871815 containerd[1948]: time="2025-09-12T17:45:34.871765073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2267eb877c4d3280668c81e6af46ef0275ec38b76d20199fa8ad9cd97de7b075\" id:\"ccbc1824258ea53809676fd52f4527431f872bae5787bbfd471741c32b006930\" pid:6007 exited_at:{seconds:1757699134 nanos:870097414}" Sep 12 17:45:35.217588 systemd-networkd[1817]: lxc_health: Gained IPv6LL Sep 12 17:45:37.061857 containerd[1948]: time="2025-09-12T17:45:37.061813936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2267eb877c4d3280668c81e6af46ef0275ec38b76d20199fa8ad9cd97de7b075\" id:\"9125c041446d7e4393227deca2efe599b90178ac068ae0dec0d568c66c8c356d\" pid:6037 exited_at:{seconds:1757699137 nanos:60016609}" Sep 12 17:45:37.995176 ntpd[1870]: Listen normally on 14 lxc_health [fe80::ccf4:83ff:fe44:d441%14]:123 Sep 12 17:45:37.995775 ntpd[1870]: 12 Sep 17:45:37 ntpd[1870]: Listen normally on 14 lxc_health [fe80::ccf4:83ff:fe44:d441%14]:123 Sep 12 17:45:39.179607 containerd[1948]: time="2025-09-12T17:45:39.179554803Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2267eb877c4d3280668c81e6af46ef0275ec38b76d20199fa8ad9cd97de7b075\" id:\"fecaaca1538948d53413044f2da52e355ae4855e549d5a53cc05c600adf890b0\" pid:6068 exited_at:{seconds:1757699139 nanos:178657207}" Sep 12 17:45:39.212731 sshd[5303]: Connection closed by 139.178.68.195 port 59738 Sep 12 17:45:39.213849 sshd-session[5223]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:39.218064 systemd[1]: sshd@27-172.31.16.83:22-139.178.68.195:59738.service: Deactivated successfully. Sep 12 17:45:39.220208 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 17:45:39.222029 systemd-logind[1901]: Session 28 logged out. Waiting for processes to exit. Sep 12 17:45:39.224401 systemd-logind[1901]: Removed session 28. 
Sep 12 17:45:48.877713 containerd[1948]: time="2025-09-12T17:45:48.877658081Z" level=info msg="StopPodSandbox for \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\"" Sep 12 17:45:48.878256 containerd[1948]: time="2025-09-12T17:45:48.877830621Z" level=info msg="TearDown network for sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" successfully" Sep 12 17:45:48.878256 containerd[1948]: time="2025-09-12T17:45:48.877847329Z" level=info msg="StopPodSandbox for \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" returns successfully" Sep 12 17:45:48.878457 containerd[1948]: time="2025-09-12T17:45:48.878391382Z" level=info msg="RemovePodSandbox for \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\"" Sep 12 17:45:48.878457 containerd[1948]: time="2025-09-12T17:45:48.878437987Z" level=info msg="Forcibly stopping sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\"" Sep 12 17:45:48.878624 containerd[1948]: time="2025-09-12T17:45:48.878547359Z" level=info msg="TearDown network for sandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" successfully" Sep 12 17:45:48.880177 containerd[1948]: time="2025-09-12T17:45:48.880147314Z" level=info msg="Ensure that sandbox 326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c in task-service has been cleanup successfully" Sep 12 17:45:48.886517 containerd[1948]: time="2025-09-12T17:45:48.886466109Z" level=info msg="RemovePodSandbox \"326a0830ce0f8904b752a955028bb69f66e90f70d88f08a7004c94dec4799d6c\" returns successfully" Sep 12 17:45:48.887173 containerd[1948]: time="2025-09-12T17:45:48.886982947Z" level=info msg="StopPodSandbox for \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\"" Sep 12 17:45:48.887173 containerd[1948]: time="2025-09-12T17:45:48.887108336Z" level=info msg="TearDown network for sandbox \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\" successfully" Sep 12 17:45:48.887173 containerd[1948]: time="2025-09-12T17:45:48.887120287Z" level=info msg="StopPodSandbox for \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\" returns successfully" Sep 12 17:45:48.887775 containerd[1948]: time="2025-09-12T17:45:48.887748643Z" level=info msg="RemovePodSandbox for \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\"" Sep 12 17:45:48.887957 containerd[1948]: time="2025-09-12T17:45:48.887904917Z" level=info msg="Forcibly stopping sandbox \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\"" Sep 12 17:45:48.888512 containerd[1948]: time="2025-09-12T17:45:48.888016452Z" level=info msg="TearDown network for sandbox \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\" successfully" Sep 12 17:45:48.889147 containerd[1948]: time="2025-09-12T17:45:48.889115352Z" level=info msg="Ensure that sandbox 3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48 in task-service has been cleanup successfully" Sep 12 17:45:48.895864 containerd[1948]: time="2025-09-12T17:45:48.895788727Z" level=info msg="RemovePodSandbox \"3dbc080d61fd5972c1e8c5ede8fa45ed1d96fc84b8f41e88216a42fadbc13c48\" returns successfully" Sep 12 17:45:54.321895 systemd[1]: cri-containerd-0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386.scope: Deactivated successfully. 
Sep 12 17:45:54.323395 systemd[1]: cri-containerd-0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386.scope: Consumed 3.233s CPU time, 73.1M memory peak, 21.2M read from disk. Sep 12 17:45:54.325782 containerd[1948]: time="2025-09-12T17:45:54.325008278Z" level=info msg="received exit event container_id:\"0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386\" id:\"0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386\" pid:3164 exit_status:1 exited_at:{seconds:1757699154 nanos:324615025}" Sep 12 17:45:54.325782 containerd[1948]: time="2025-09-12T17:45:54.325268724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386\" id:\"0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386\" pid:3164 exit_status:1 exited_at:{seconds:1757699154 nanos:324615025}" Sep 12 17:45:54.353011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386-rootfs.mount: Deactivated successfully. Sep 12 17:45:54.440378 kubelet[3314]: I0912 17:45:54.440255 3314 scope.go:117] "RemoveContainer" containerID="0054d653de612946b72cfa768ad80b2b9c07f9353570ab0bfc4f4830c589e386" Sep 12 17:45:54.443546 containerd[1948]: time="2025-09-12T17:45:54.443506213Z" level=info msg="CreateContainer within sandbox \"4fd44ad89d1c2d0e5b5d56a09c06b9894a74e1fa531c798340d54ed15d97ea8d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 12 17:45:54.464099 containerd[1948]: time="2025-09-12T17:45:54.463006516Z" level=info msg="Container 2c03f11fcb6a6245e7dfa206f62a284ef341fe0832aaef1096987fa818f8895c: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:45:54.479997 containerd[1948]: time="2025-09-12T17:45:54.479932389Z" level=info msg="CreateContainer within sandbox \"4fd44ad89d1c2d0e5b5d56a09c06b9894a74e1fa531c798340d54ed15d97ea8d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2c03f11fcb6a6245e7dfa206f62a284ef341fe0832aaef1096987fa818f8895c\"" Sep 12 17:45:54.480700 containerd[1948]: time="2025-09-12T17:45:54.480668295Z" level=info msg="StartContainer for \"2c03f11fcb6a6245e7dfa206f62a284ef341fe0832aaef1096987fa818f8895c\"" Sep 12 17:45:54.482044 containerd[1948]: time="2025-09-12T17:45:54.482011337Z" level=info msg="connecting to shim 2c03f11fcb6a6245e7dfa206f62a284ef341fe0832aaef1096987fa818f8895c" address="unix:///run/containerd/s/67d1065a768eb2f5b4a2b7cbf648734595157a5c9110e2461e87297ad5162f72" protocol=ttrpc version=3 Sep 12 17:45:54.514745 systemd[1]: Started cri-containerd-2c03f11fcb6a6245e7dfa206f62a284ef341fe0832aaef1096987fa818f8895c.scope - libcontainer container 2c03f11fcb6a6245e7dfa206f62a284ef341fe0832aaef1096987fa818f8895c. Sep 12 17:45:54.576004 containerd[1948]: time="2025-09-12T17:45:54.575889014Z" level=info msg="StartContainer for \"2c03f11fcb6a6245e7dfa206f62a284ef341fe0832aaef1096987fa818f8895c\" returns successfully" Sep 12 17:45:59.580001 systemd[1]: cri-containerd-e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d.scope: Deactivated successfully. Sep 12 17:45:59.580270 systemd[1]: cri-containerd-e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d.scope: Consumed 2.200s CPU time, 32.2M memory peak, 13.7M read from disk. 
Sep 12 17:45:59.582863 containerd[1948]: time="2025-09-12T17:45:59.582831392Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d\" id:\"e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d\" pid:3126 exit_status:1 exited_at:{seconds:1757699159 nanos:582485239}" Sep 12 17:45:59.583857 containerd[1948]: time="2025-09-12T17:45:59.582902296Z" level=info msg="received exit event container_id:\"e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d\" id:\"e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d\" pid:3126 exit_status:1 exited_at:{seconds:1757699159 nanos:582485239}" Sep 12 17:45:59.612776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d-rootfs.mount: Deactivated successfully. Sep 12 17:46:00.459865 kubelet[3314]: I0912 17:46:00.459830 3314 scope.go:117] "RemoveContainer" containerID="e645eee543862f3f6b4a5ef755948d14096c77c582e869b6764a1e5f01c4787d" Sep 12 17:46:00.461903 containerd[1948]: time="2025-09-12T17:46:00.461861138Z" level=info msg="CreateContainer within sandbox \"fdab44d708724546a0f05e41dac1241c16d0b12228537517cfc1af8abbfa8237\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 12 17:46:00.479301 containerd[1948]: time="2025-09-12T17:46:00.479248285Z" level=info msg="Container 05889920766442c3dd692d3f6999e5146bdd51976b01038c1c3f6c5495149f86: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:46:00.493170 containerd[1948]: time="2025-09-12T17:46:00.493122117Z" level=info msg="CreateContainer within sandbox \"fdab44d708724546a0f05e41dac1241c16d0b12228537517cfc1af8abbfa8237\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"05889920766442c3dd692d3f6999e5146bdd51976b01038c1c3f6c5495149f86\"" Sep 12 17:46:00.493835 containerd[1948]: time="2025-09-12T17:46:00.493785739Z" level=info msg="StartContainer for \"05889920766442c3dd692d3f6999e5146bdd51976b01038c1c3f6c5495149f86\"" Sep 12 17:46:00.494923 containerd[1948]: time="2025-09-12T17:46:00.494877730Z" level=info msg="connecting to shim 05889920766442c3dd692d3f6999e5146bdd51976b01038c1c3f6c5495149f86" address="unix:///run/containerd/s/3de95bab381e5957a5574a303d1912b4a61fd96d57a86c3290b0939591208247" protocol=ttrpc version=3 Sep 12 17:46:00.521692 systemd[1]: Started cri-containerd-05889920766442c3dd692d3f6999e5146bdd51976b01038c1c3f6c5495149f86.scope - libcontainer container 05889920766442c3dd692d3f6999e5146bdd51976b01038c1c3f6c5495149f86. Sep 12 17:46:00.583679 containerd[1948]: time="2025-09-12T17:46:00.583595382Z" level=info msg="StartContainer for \"05889920766442c3dd692d3f6999e5146bdd51976b01038c1c3f6c5495149f86\" returns successfully" Sep 12 17:46:01.051163 kubelet[3314]: E0912 17:46:01.048943 3314 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-83?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"