Jul 10 00:20:54.896979 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025
Jul 10 00:20:54.897019 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:20:54.897034 kernel: BIOS-provided physical RAM map:
Jul 10 00:20:54.897046 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 10 00:20:54.897057 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jul 10 00:20:54.897068 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jul 10 00:20:54.897083 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 10 00:20:54.897096 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 10 00:20:54.897111 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 10 00:20:54.897122 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 10 00:20:54.897134 kernel: NX (Execute Disable) protection: active
Jul 10 00:20:54.897145 kernel: APIC: Static calls initialized
Jul 10 00:20:54.897157 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Jul 10 00:20:54.897169 kernel: extended physical RAM map:
Jul 10 00:20:54.897186 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 10 00:20:54.897199 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Jul 10 00:20:54.897213 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Jul 10 00:20:54.897227 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Jul 10 00:20:54.897241 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jul 10 00:20:54.897254 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 10 00:20:54.897268 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 10 00:20:54.897282 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 10 00:20:54.897296 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 10 00:20:54.897310 kernel: efi: EFI v2.7 by EDK II
Jul 10 00:20:54.897328 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Jul 10 00:20:54.897343 kernel: secureboot: Secure boot disabled
Jul 10 00:20:54.897356 kernel: SMBIOS 2.7 present.
Jul 10 00:20:54.897371 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jul 10 00:20:54.897384 kernel: DMI: Memory slots populated: 1/1
Jul 10 00:20:54.897398 kernel: Hypervisor detected: KVM
Jul 10 00:20:54.897412 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 10 00:20:54.897426 kernel: kvm-clock: using sched offset of 5242907064 cycles
Jul 10 00:20:54.897440 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 10 00:20:54.897454 kernel: tsc: Detected 2499.996 MHz processor
Jul 10 00:20:54.897468 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 00:20:54.897485 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 00:20:54.897498 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jul 10 00:20:54.897512 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 10 00:20:54.897525 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 00:20:54.897538 kernel: Using GB pages for direct mapping
Jul 10 00:20:54.897557 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:20:54.897575 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jul 10 00:20:54.897590 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 10 00:20:54.897604 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 10 00:20:54.897618 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jul 10 00:20:54.897632 kernel: ACPI: FACS 0x00000000789D0000 000040
Jul 10 00:20:54.897646 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jul 10 00:20:54.897660 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 10 00:20:54.897675 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 10 00:20:54.897693 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jul 10 00:20:54.897707 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jul 10 00:20:54.897722 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 10 00:20:54.897749 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 10 00:20:54.897762 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jul 10 00:20:54.897775 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jul 10 00:20:54.897787 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jul 10 00:20:54.897800 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jul 10 00:20:54.897817 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jul 10 00:20:54.897831 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jul 10 00:20:54.897846 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jul 10 00:20:54.897860 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jul 10 00:20:54.897874 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jul 10 00:20:54.897888 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jul 10 00:20:54.897903 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jul 10 00:20:54.897916 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jul 10 00:20:54.897930 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jul 10 00:20:54.897944 kernel: NUMA: Initialized distance table, cnt=1
Jul 10 00:20:54.897962 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Jul 10 00:20:54.897976 kernel: Zone ranges:
Jul 10 00:20:54.897991 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 00:20:54.898005 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jul 10 00:20:54.898019 kernel: Normal empty
Jul 10 00:20:54.898033 kernel: Device empty
Jul 10 00:20:54.898048 kernel: Movable zone start for each node
Jul 10 00:20:54.898062 kernel: Early memory node ranges
Jul 10 00:20:54.898076 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 10 00:20:54.898093 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jul 10 00:20:54.898108 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jul 10 00:20:54.898122 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jul 10 00:20:54.898137 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:20:54.898151 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 10 00:20:54.898166 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 10 00:20:54.898181 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jul 10 00:20:54.898195 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 10 00:20:54.898210 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 10 00:20:54.898224 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jul 10 00:20:54.898241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 10 00:20:54.898256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 10 00:20:54.898270 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 10 00:20:54.898284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 10 00:20:54.898298 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 00:20:54.898312 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 10 00:20:54.898327 kernel: TSC deadline timer available
Jul 10 00:20:54.898340 kernel: CPU topo: Max. logical packages: 1
Jul 10 00:20:54.898358 kernel: CPU topo: Max. logical dies: 1
Jul 10 00:20:54.898375 kernel: CPU topo: Max. dies per package: 1
Jul 10 00:20:54.898389 kernel: CPU topo: Max. threads per core: 2
Jul 10 00:20:54.898403 kernel: CPU topo: Num. cores per package: 1
Jul 10 00:20:54.898418 kernel: CPU topo: Num. threads per package: 2
Jul 10 00:20:54.898431 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 10 00:20:54.898447 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 10 00:20:54.898462 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jul 10 00:20:54.898477 kernel: Booting paravirtualized kernel on KVM
Jul 10 00:20:54.898491 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 00:20:54.898508 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 10 00:20:54.898523 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 10 00:20:54.898536 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 10 00:20:54.898550 kernel: pcpu-alloc: [0] 0 1
Jul 10 00:20:54.898564 kernel: kvm-guest: PV spinlocks enabled
Jul 10 00:20:54.898579 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 10 00:20:54.898596 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:20:54.898611 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:20:54.898636 kernel: random: crng init done
Jul 10 00:20:54.898651 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:20:54.898665 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 10 00:20:54.898680 kernel: Fallback order for Node 0: 0
Jul 10 00:20:54.898695 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Jul 10 00:20:54.898709 kernel: Policy zone: DMA32
Jul 10 00:20:54.900785 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:20:54.900811 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 10 00:20:54.900827 kernel: Kernel/User page tables isolation: enabled
Jul 10 00:20:54.900844 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 10 00:20:54.900859 kernel: ftrace: allocated 157 pages with 5 groups
Jul 10 00:20:54.900879 kernel: Dynamic Preempt: voluntary
Jul 10 00:20:54.900894 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:20:54.900915 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:20:54.900930 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 10 00:20:54.900946 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:20:54.900962 kernel: Rude variant of Tasks RCU enabled.
Jul 10 00:20:54.900980 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:20:54.900996 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:20:54.901012 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 10 00:20:54.901028 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:20:54.901044 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:20:54.901060 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:20:54.901076 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 10 00:20:54.901092 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:20:54.901110 kernel: Console: colour dummy device 80x25
Jul 10 00:20:54.901126 kernel: printk: legacy console [tty0] enabled
Jul 10 00:20:54.901142 kernel: printk: legacy console [ttyS0] enabled
Jul 10 00:20:54.901158 kernel: ACPI: Core revision 20240827
Jul 10 00:20:54.901174 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jul 10 00:20:54.901190 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 00:20:54.901205 kernel: x2apic enabled
Jul 10 00:20:54.901221 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 10 00:20:54.901237 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jul 10 00:20:54.901256 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jul 10 00:20:54.901272 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 10 00:20:54.901287 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 10 00:20:54.901303 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 00:20:54.901318 kernel: Spectre V2 : Mitigation: Retpolines
Jul 10 00:20:54.901333 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 10 00:20:54.901349 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 10 00:20:54.901364 kernel: RETBleed: Vulnerable
Jul 10 00:20:54.901379 kernel: Speculative Store Bypass: Vulnerable
Jul 10 00:20:54.901394 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 10 00:20:54.901409 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 10 00:20:54.901428 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 10 00:20:54.901443 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 10 00:20:54.901458 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 10 00:20:54.901473 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 10 00:20:54.901489 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 10 00:20:54.901504 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 10 00:20:54.901519 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 10 00:20:54.901535 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 10 00:20:54.901550 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 10 00:20:54.901566 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 10 00:20:54.901584 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 10 00:20:54.901599 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 10 00:20:54.901614 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 10 00:20:54.901630 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 10 00:20:54.901645 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jul 10 00:20:54.901660 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jul 10 00:20:54.901675 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jul 10 00:20:54.901690 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jul 10 00:20:54.901705 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jul 10 00:20:54.901721 kernel: Freeing SMP alternatives memory: 32K
Jul 10 00:20:54.905493 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:20:54.905511 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 00:20:54.905533 kernel: landlock: Up and running.
Jul 10 00:20:54.905548 kernel: SELinux: Initializing.
Jul 10 00:20:54.905563 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 10 00:20:54.905579 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 10 00:20:54.905594 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 10 00:20:54.905609 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 10 00:20:54.905625 kernel: signal: max sigframe size: 3632
Jul 10 00:20:54.905640 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:20:54.905654 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:20:54.905669 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 00:20:54.905685 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 10 00:20:54.905707 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:20:54.905771 kernel: smpboot: x86: Booting SMP configuration:
Jul 10 00:20:54.905786 kernel: .... node #0, CPUs: #1
Jul 10 00:20:54.905802 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 10 00:20:54.905818 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 10 00:20:54.905832 kernel: smp: Brought up 1 node, 2 CPUs
Jul 10 00:20:54.905847 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jul 10 00:20:54.905862 kernel: Memory: 1908052K/2037804K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 125188K reserved, 0K cma-reserved)
Jul 10 00:20:54.905880 kernel: devtmpfs: initialized
Jul 10 00:20:54.905895 kernel: x86/mm: Memory block size: 128MB
Jul 10 00:20:54.905910 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jul 10 00:20:54.905927 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:20:54.905943 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 10 00:20:54.905959 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:20:54.905975 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:20:54.905991 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:20:54.906011 kernel: audit: type=2000 audit(1752106852.252:1): state=initialized audit_enabled=0 res=1
Jul 10 00:20:54.906026 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:20:54.906040 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 10 00:20:54.906055 kernel: cpuidle: using governor menu
Jul 10 00:20:54.906069 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:20:54.906083 kernel: dca service started, version 1.12.1
Jul 10 00:20:54.906098 kernel: PCI: Using configuration type 1 for base access
Jul 10 00:20:54.906112 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 10 00:20:54.906127 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:20:54.906144 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:20:54.906159 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:20:54.906173 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:20:54.906188 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:20:54.906203 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:20:54.906217 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:20:54.906232 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 10 00:20:54.906246 kernel: ACPI: Interpreter enabled
Jul 10 00:20:54.906260 kernel: ACPI: PM: (supports S0 S5)
Jul 10 00:20:54.906275 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 10 00:20:54.906293 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 10 00:20:54.906308 kernel: PCI: Using E820 reservations for host bridge windows
Jul 10 00:20:54.906323 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 10 00:20:54.906337 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:20:54.906562 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:20:54.906710 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 10 00:20:54.907944 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 10 00:20:54.907973 kernel: acpiphp: Slot [3] registered
Jul 10 00:20:54.907989 kernel: acpiphp: Slot [4] registered
Jul 10 00:20:54.908004 kernel: acpiphp: Slot [5] registered
Jul 10 00:20:54.908019 kernel: acpiphp: Slot [6] registered
Jul 10 00:20:54.908034 kernel: acpiphp: Slot [7] registered
Jul 10 00:20:54.908049 kernel: acpiphp: Slot [8] registered
Jul 10 00:20:54.908063 kernel: acpiphp: Slot [9] registered
Jul 10 00:20:54.908078 kernel: acpiphp: Slot [10] registered
Jul 10 00:20:54.908093 kernel: acpiphp: Slot [11] registered
Jul 10 00:20:54.908110 kernel: acpiphp: Slot [12] registered
Jul 10 00:20:54.908125 kernel: acpiphp: Slot [13] registered
Jul 10 00:20:54.908140 kernel: acpiphp: Slot [14] registered
Jul 10 00:20:54.908154 kernel: acpiphp: Slot [15] registered
Jul 10 00:20:54.908169 kernel: acpiphp: Slot [16] registered
Jul 10 00:20:54.908183 kernel: acpiphp: Slot [17] registered
Jul 10 00:20:54.908198 kernel: acpiphp: Slot [18] registered
Jul 10 00:20:54.908212 kernel: acpiphp: Slot [19] registered
Jul 10 00:20:54.908227 kernel: acpiphp: Slot [20] registered
Jul 10 00:20:54.908244 kernel: acpiphp: Slot [21] registered
Jul 10 00:20:54.908258 kernel: acpiphp: Slot [22] registered
Jul 10 00:20:54.908273 kernel: acpiphp: Slot [23] registered
Jul 10 00:20:54.908287 kernel: acpiphp: Slot [24] registered
Jul 10 00:20:54.908302 kernel: acpiphp: Slot [25] registered
Jul 10 00:20:54.908316 kernel: acpiphp: Slot [26] registered
Jul 10 00:20:54.908330 kernel: acpiphp: Slot [27] registered
Jul 10 00:20:54.908345 kernel: acpiphp: Slot [28] registered
Jul 10 00:20:54.908359 kernel: acpiphp: Slot [29] registered
Jul 10 00:20:54.908374 kernel: acpiphp: Slot [30] registered
Jul 10 00:20:54.908391 kernel: acpiphp: Slot [31] registered
Jul 10 00:20:54.908405 kernel: PCI host bridge to bus 0000:00
Jul 10 00:20:54.908539 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 10 00:20:54.908659 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 10 00:20:54.908793 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 10 00:20:54.908909 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 10 00:20:54.909024 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jul 10 00:20:54.909147 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:20:54.909298 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 10 00:20:54.909438 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 10 00:20:54.909575 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Jul 10 00:20:54.909706 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 10 00:20:54.909861 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jul 10 00:20:54.909996 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jul 10 00:20:54.910125 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jul 10 00:20:54.910253 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jul 10 00:20:54.910382 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jul 10 00:20:54.910510 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jul 10 00:20:54.910661 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 10 00:20:54.910815 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Jul 10 00:20:54.910950 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 10 00:20:54.911078 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 10 00:20:54.911213 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Jul 10 00:20:54.911343 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Jul 10 00:20:54.911477 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Jul 10 00:20:54.911606 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Jul 10 00:20:54.911625 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 10 00:20:54.911644 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 10 00:20:54.911659 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 10 00:20:54.911673 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 10 00:20:54.911688 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 10 00:20:54.911703 kernel: iommu: Default domain type: Translated
Jul 10 00:20:54.911718 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 10 00:20:54.913768 kernel: efivars: Registered efivars operations
Jul 10 00:20:54.913787 kernel: PCI: Using ACPI for IRQ routing
Jul 10 00:20:54.913802 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 10 00:20:54.913821 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Jul 10 00:20:54.913836 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jul 10 00:20:54.913851 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jul 10 00:20:54.914010 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jul 10 00:20:54.914140 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jul 10 00:20:54.914267 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 10 00:20:54.914287 kernel: vgaarb: loaded
Jul 10 00:20:54.914302 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jul 10 00:20:54.914320 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jul 10 00:20:54.914335 kernel: clocksource: Switched to clocksource kvm-clock
Jul 10 00:20:54.914349 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:20:54.914365 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:20:54.914379 kernel: pnp: PnP ACPI init
Jul 10 00:20:54.914394 kernel: pnp: PnP ACPI: found 5 devices
Jul 10 00:20:54.914409 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 10 00:20:54.914424 kernel: NET: Registered PF_INET protocol family
Jul 10 00:20:54.914439 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:20:54.914456 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 10 00:20:54.914471 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:20:54.914487 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 10 00:20:54.914501 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 10 00:20:54.914516 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 10 00:20:54.914533 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 10 00:20:54.914546 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 10 00:20:54.914559 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:20:54.914572 kernel: NET: Registered PF_XDP protocol family
Jul 10 00:20:54.914709 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 10 00:20:54.914865 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 10 00:20:54.914978 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 10 00:20:54.915090 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 10 00:20:54.915200 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jul 10 00:20:54.915336 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 10 00:20:54.915356 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:20:54.915372 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 10 00:20:54.915392 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jul 10 00:20:54.915406 kernel: clocksource: Switched to clocksource tsc
Jul 10 00:20:54.915420 kernel: Initialise system trusted keyrings
Jul 10 00:20:54.915435 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 10 00:20:54.915449 kernel: Key type asymmetric registered
Jul 10 00:20:54.915463 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:20:54.915477 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:20:54.915491 kernel: io scheduler mq-deadline registered
Jul 10 00:20:54.915506 kernel: io scheduler kyber registered
Jul 10 00:20:54.915522 kernel: io scheduler bfq registered
Jul 10 00:20:54.915537 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 10 00:20:54.915551 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:20:54.915565 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:20:54.915580 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 10 00:20:54.915594 kernel: i8042: Warning: Keylock active
Jul 10 00:20:54.915608 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 10 00:20:54.915622 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 10 00:20:54.917375 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 10 00:20:54.917525 kernel: rtc_cmos 00:00: registered as rtc0
Jul 10 00:20:54.917650 kernel: rtc_cmos 00:00: setting system clock to 2025-07-10T00:20:54 UTC (1752106854)
Jul 10 00:20:54.917799 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 10 00:20:54.917820 kernel: intel_pstate: CPU model not supported
Jul 10 00:20:54.917863 kernel: efifb: probing for efifb
Jul 10 00:20:54.917883 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Jul 10 00:20:54.917900 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jul 10 00:20:54.917921 kernel: efifb: scrolling: redraw
Jul 10 00:20:54.917938 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 10 00:20:54.917955 kernel: Console: switching to colour frame buffer device 100x37
Jul 10 00:20:54.917971 kernel: fb0: EFI VGA frame buffer device
Jul 10 00:20:54.917988 kernel: pstore: Using crash dump compression: deflate
Jul 10 00:20:54.918005 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 10 00:20:54.918022 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:20:54.918038 kernel: Segment Routing with IPv6
Jul 10 00:20:54.918055 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:20:54.918072 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:20:54.918091 kernel: Key type dns_resolver registered
Jul 10 00:20:54.918108 kernel: IPI shorthand broadcast: enabled
Jul 10 00:20:54.918125 kernel: sched_clock: Marking stable (2673002730, 141202162)->(2888096040, -73891148)
Jul 10 00:20:54.918141 kernel: registered taskstats version 1
Jul 10 00:20:54.918158 kernel: Loading compiled-in X.509 certificates
Jul 10 00:20:54.918175 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf'
Jul 10 00:20:54.918191 kernel: Demotion targets for Node 0: null
Jul 10 00:20:54.918208 kernel: Key type .fscrypt registered
Jul 10 00:20:54.918224 kernel: Key type fscrypt-provisioning registered
Jul 10 00:20:54.918244 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:20:54.918261 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:20:54.918278 kernel: ima: No architecture policies found
Jul 10 00:20:54.918294 kernel: clk: Disabling unused clocks
Jul 10 00:20:54.918311 kernel: Warning: unable to open an initial console.
Jul 10 00:20:54.918328 kernel: Freeing unused kernel image (initmem) memory: 54420K
Jul 10 00:20:54.918348 kernel: Write protecting the kernel read-only data: 24576k
Jul 10 00:20:54.918364 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 10 00:20:54.918384 kernel: Run /init as init process
Jul 10 00:20:54.918404 kernel: with arguments:
Jul 10 00:20:54.918420 kernel: /init
Jul 10 00:20:54.918436 kernel: with environment:
Jul 10 00:20:54.918453 kernel: HOME=/
Jul 10 00:20:54.918469 kernel: TERM=linux
Jul 10 00:20:54.918489 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:20:54.918508 systemd[1]: Successfully made /usr/ read-only.
Jul 10 00:20:54.918530 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:20:54.918548 systemd[1]: Detected virtualization amazon. Jul 10 00:20:54.918565 systemd[1]: Detected architecture x86-64. Jul 10 00:20:54.918582 systemd[1]: Running in initrd. Jul 10 00:20:54.918599 systemd[1]: No hostname configured, using default hostname. Jul 10 00:20:54.918628 systemd[1]: Hostname set to . Jul 10 00:20:54.918646 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:20:54.918663 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:20:54.918681 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:20:54.918699 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:20:54.918718 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 00:20:54.918749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:20:54.918766 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 00:20:54.918789 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 00:20:54.918807 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 00:20:54.919565 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 00:20:54.919587 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
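The systemd banner above encodes its compile-time features as a run of `+FLAG`/`-FLAG` tokens (`+PAM +AUDIT ... -APPARMOR ...`). A minimal Python sketch (illustrative only, not systemd's own code) of splitting such a banner into enabled and disabled feature sets:

```python
# Illustrative parser for the +FLAG/-FLAG feature list that systemd
# prints at startup (see the "systemd 256.8 running in system mode"
# line above). Function name is an assumption, not a real systemd API.
def parse_features(banner: str):
    enabled, disabled = set(), set()
    for tok in banner.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
    return enabled, disabled

# Sample taken from the banner in the log above.
on, off = parse_features("+PAM +AUDIT +SELINUX -APPARMOR +IMA -GCRYPT +OPENSSL")
```

Applied to the full banner, this recovers, for example, that SELinux and OpenSSL support were compiled in while AppArmor was not.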
Jul 10 00:20:54.919604 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:20:54.919623 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:20:54.919639 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:20:54.919661 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:20:54.919676 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:20:54.919693 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:20:54.919710 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:20:54.919755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 00:20:54.919771 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 00:20:54.919786 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:20:54.919800 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:20:54.919821 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:20:54.919838 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:20:54.919856 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 00:20:54.919872 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:20:54.919888 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 00:20:54.919904 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 00:20:54.919921 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:20:54.919936 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:20:54.919951 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 10 00:20:54.919971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:20:54.919986 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 00:20:54.920002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:20:54.920047 systemd-journald[207]: Collecting audit messages is disabled. Jul 10 00:20:54.920084 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:20:54.920100 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:20:54.920117 systemd-journald[207]: Journal started Jul 10 00:20:54.920155 systemd-journald[207]: Runtime Journal (/run/log/journal/ec24924339e0f2f67c0c1a7a2a3c4852) is 4.8M, max 38.4M, 33.6M free. Jul 10 00:20:54.922903 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:20:54.923413 systemd-modules-load[208]: Inserted module 'overlay' Jul 10 00:20:54.930004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:20:54.933883 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 00:20:54.938566 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:20:54.943958 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:20:54.950232 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:20:54.972144 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 00:20:54.990207 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jul 10 00:20:54.990247 kernel: Bridge firewalling registered Jul 10 00:20:54.979507 systemd-modules-load[208]: Inserted module 'br_netfilter' Jul 10 00:20:54.982856 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:20:54.990452 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:20:54.991481 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:20:54.992397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:20:54.995800 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 00:20:54.998863 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:20:55.002778 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 10 00:20:55.019095 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:20:55.024876 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:20:55.027646 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:20:55.084054 systemd-resolved[254]: Positive Trust Anchors: Jul 10 00:20:55.084070 systemd-resolved[254]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:20:55.084134 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:20:55.091283 systemd-resolved[254]: Defaulting to hostname 'linux'. Jul 10 00:20:55.095229 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:20:55.095937 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:20:55.126798 kernel: SCSI subsystem initialized Jul 10 00:20:55.136755 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:20:55.147763 kernel: iscsi: registered transport (tcp) Jul 10 00:20:55.170044 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:20:55.170135 kernel: QLogic iSCSI HBA Driver Jul 10 00:20:55.188202 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:20:55.210971 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:20:55.213110 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:20:55.257855 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 00:20:55.259645 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
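systemd-resolved's "Negative trust anchors" list above names domains (RFC 1918 reverse zones, `home.arpa`, `local`, and so on) for which DNSSEC validation is not required. A hedged sketch of the suffix-matching this implies — a name is covered if it equals an anchor or is a subdomain of one; the function and the abbreviated anchor set are illustrative, not resolved's actual implementation:

```python
# Abbreviated from the negative-trust-anchor list logged above.
NEGATIVE_ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
    "d.f.ip6.arpa", "corp", "home", "internal", "intranet",
    "lan", "local", "private", "test",
}

def under_negative_anchor(name: str) -> bool:
    """True if `name` equals, or is a subdomain of, a negative anchor."""
    labels = name.rstrip(".").split(".")
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS
               for i in range(len(labels)))
```

For instance, `printer.local` falls under the `local` anchor, while `example.com` does not.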
Jul 10 00:20:55.307781 kernel: raid6: avx512x4 gen() 15121 MB/s Jul 10 00:20:55.325759 kernel: raid6: avx512x2 gen() 15128 MB/s Jul 10 00:20:55.343759 kernel: raid6: avx512x1 gen() 15124 MB/s Jul 10 00:20:55.361754 kernel: raid6: avx2x4 gen() 15086 MB/s Jul 10 00:20:55.379758 kernel: raid6: avx2x2 gen() 15120 MB/s Jul 10 00:20:55.397928 kernel: raid6: avx2x1 gen() 11501 MB/s Jul 10 00:20:55.397973 kernel: raid6: using algorithm avx512x2 gen() 15128 MB/s Jul 10 00:20:55.416950 kernel: raid6: .... xor() 24355 MB/s, rmw enabled Jul 10 00:20:55.417000 kernel: raid6: using avx512x2 recovery algorithm Jul 10 00:20:55.438763 kernel: xor: automatically using best checksumming function avx Jul 10 00:20:55.606762 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 00:20:55.613008 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:20:55.615080 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:20:55.642251 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jul 10 00:20:55.648999 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:20:55.653870 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 00:20:55.685587 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Jul 10 00:20:55.688234 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jul 10 00:20:55.714290 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:20:55.716000 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:20:55.774044 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:20:55.778295 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
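The raid6 lines above show the kernel benchmarking each available `gen()` implementation and keeping the fastest ("using algorithm avx512x2 gen() 15128 MB/s"). The selection step amounts to a max-by-throughput, sketched here with the numbers copied from the log (the dict literal is just a transcription, not kernel code):

```python
# Throughputs (MB/s) transcribed from the raid6 benchmark messages above.
bench = {
    "avx512x4": 15121, "avx512x2": 15128, "avx512x1": 15124,
    "avx2x4": 15086, "avx2x2": 15120, "avx2x1": 11501,
}

# The kernel keeps the implementation with the highest measured rate.
best = max(bench, key=bench.get)
```

On this run the winner is `avx512x2`, matching the "using algorithm avx512x2" message.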
Jul 10 00:20:55.840030 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 10 00:20:55.840253 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 10 00:20:55.849075 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:20:55.849129 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jul 10 00:20:55.861767 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:d0:49:4c:e5:e5 Jul 10 00:20:55.867459 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 10 00:20:55.867673 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 10 00:20:55.866281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:20:55.866352 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:20:55.872607 kernel: AES CTR mode by8 optimization enabled Jul 10 00:20:55.871028 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:20:55.874909 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:20:55.883174 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:20:55.888746 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 10 00:20:55.901927 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:20:55.901992 kernel: GPT:9289727 != 16777215 Jul 10 00:20:55.902011 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:20:55.902030 kernel: GPT:9289727 != 16777215 Jul 10 00:20:55.902047 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:20:55.902065 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:20:55.914319 (udev-worker)[511]: Network interface NamePolicy= disabled on kernel command line. Jul 10 00:20:55.925546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 10 00:20:55.953756 kernel: nvme nvme0: using unchecked data buffer Jul 10 00:20:56.030984 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 10 00:20:56.054743 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 10 00:20:56.062765 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 00:20:56.074849 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 10 00:20:56.084526 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 10 00:20:56.085127 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 10 00:20:56.086535 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:20:56.087629 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:20:56.088752 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:20:56.090422 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:20:56.094869 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:20:56.110774 disk-uuid[697]: Primary Header is updated. Jul 10 00:20:56.110774 disk-uuid[697]: Secondary Entries is updated. Jul 10 00:20:56.110774 disk-uuid[697]: Secondary Header is updated. Jul 10 00:20:56.117981 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:20:56.119367 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:20:57.128759 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:20:57.128816 disk-uuid[700]: The operation has completed successfully. Jul 10 00:20:57.276173 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jul 10 00:20:57.276312 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:20:57.306740 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 00:20:57.331082 sh[965]: Success Jul 10 00:20:57.363095 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:20:57.363172 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:20:57.363194 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 00:20:57.375993 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 10 00:20:57.484949 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:20:57.487558 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:20:57.496341 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 00:20:57.515142 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 00:20:57.515209 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (988) Jul 10 00:20:57.521273 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee Jul 10 00:20:57.521328 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:20:57.521342 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 00:20:57.681832 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 00:20:57.682820 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:20:57.683366 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 00:20:57.684114 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 10 00:20:57.685673 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 00:20:57.725752 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:12) scanned by mount (1021) Jul 10 00:20:57.733274 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:20:57.733353 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:20:57.733375 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:20:57.745752 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:20:57.747545 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:20:57.750496 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 00:20:57.785851 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:20:57.788360 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:20:57.825822 systemd-networkd[1157]: lo: Link UP Jul 10 00:20:57.825832 systemd-networkd[1157]: lo: Gained carrier Jul 10 00:20:57.827979 systemd-networkd[1157]: Enumeration completed Jul 10 00:20:57.828402 systemd-networkd[1157]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:20:57.828408 systemd-networkd[1157]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:20:57.830006 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:20:57.831467 systemd[1]: Reached target network.target - Network. 
Jul 10 00:20:57.832295 systemd-networkd[1157]: eth0: Link UP Jul 10 00:20:57.832300 systemd-networkd[1157]: eth0: Gained carrier Jul 10 00:20:57.832314 systemd-networkd[1157]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:20:57.844816 systemd-networkd[1157]: eth0: DHCPv4 address 172.31.26.174/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 10 00:20:58.254766 ignition[1110]: Ignition 2.21.0 Jul 10 00:20:58.254779 ignition[1110]: Stage: fetch-offline Jul 10 00:20:58.254945 ignition[1110]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:20:58.254953 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 00:20:58.256516 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:20:58.255329 ignition[1110]: Ignition finished successfully Jul 10 00:20:58.258260 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 10 00:20:58.284984 ignition[1167]: Ignition 2.21.0 Jul 10 00:20:58.285014 ignition[1167]: Stage: fetch Jul 10 00:20:58.285357 ignition[1167]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:20:58.285371 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 00:20:58.285470 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 00:20:58.293163 ignition[1167]: PUT result: OK Jul 10 00:20:58.294701 ignition[1167]: parsed url from cmdline: "" Jul 10 00:20:58.294711 ignition[1167]: no config URL provided Jul 10 00:20:58.294718 ignition[1167]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:20:58.294746 ignition[1167]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:20:58.294770 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 00:20:58.295293 ignition[1167]: PUT result: OK Jul 10 00:20:58.295330 ignition[1167]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 10 00:20:58.295887 
ignition[1167]: GET result: OK Jul 10 00:20:58.295949 ignition[1167]: parsing config with SHA512: 72037edc250a949c1ab6f4c6d9e228ee2c069827b68aed57a3b3f8f654684f3fdab8078421d23f39ba2a1cb3ff00ff7a4d6ff3c4c89f8893231fd72b9edf13f1 Jul 10 00:20:58.299665 unknown[1167]: fetched base config from "system" Jul 10 00:20:58.299677 unknown[1167]: fetched base config from "system" Jul 10 00:20:58.300004 ignition[1167]: fetch: fetch complete Jul 10 00:20:58.299682 unknown[1167]: fetched user config from "aws" Jul 10 00:20:58.300009 ignition[1167]: fetch: fetch passed Jul 10 00:20:58.300049 ignition[1167]: Ignition finished successfully Jul 10 00:20:58.302164 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 10 00:20:58.303705 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 00:20:58.333221 ignition[1173]: Ignition 2.21.0 Jul 10 00:20:58.333238 ignition[1173]: Stage: kargs Jul 10 00:20:58.333624 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:20:58.333637 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 00:20:58.333787 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 00:20:58.334824 ignition[1173]: PUT result: OK Jul 10 00:20:58.337363 ignition[1173]: kargs: kargs passed Jul 10 00:20:58.337448 ignition[1173]: Ignition finished successfully Jul 10 00:20:58.339111 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 00:20:58.340880 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
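The fetch stage above shows Ignition using IMDSv2: a `PUT` to `http://169.254.169.254/latest/api/token` to obtain a session token, then a `GET` of `/2019-10-01/user-data` carrying that token, after which the fetched config is identified by its SHA512. A hedged sketch of that exchange, building the request objects without performing any network I/O (function names are illustrative, not Ignition's code):

```python
import hashlib
import urllib.request

IMDS = "http://169.254.169.254"  # EC2 instance metadata service, per the log

def token_request(ttl: int = 21600) -> urllib.request.Request:
    # IMDSv2 step 1: PUT to the token endpoint with a TTL header.
    req = urllib.request.Request(f"{IMDS}/latest/api/token", method="PUT")
    req.add_header("X-aws-ec2-metadata-token-ttl-seconds", str(ttl))
    return req

def userdata_request(token: str) -> urllib.request.Request:
    # IMDSv2 step 2: GET user-data, presenting the token.
    req = urllib.request.Request(f"{IMDS}/2019-10-01/user-data")
    req.add_header("X-aws-ec2-metadata-token", token)
    return req

def config_digest(raw: bytes) -> str:
    # Ignition logs "parsing config with SHA512: <128 hex chars>";
    # hashlib produces the same digest form.
    return hashlib.sha512(raw).hexdigest()
```

Sending `token_request()` and then `userdata_request(token)` with an opener would reproduce the PUT/GET pair visible in the `ignition[1167]` messages.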
Jul 10 00:20:58.365338 ignition[1179]: Ignition 2.21.0 Jul 10 00:20:58.365355 ignition[1179]: Stage: disks Jul 10 00:20:58.365717 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:20:58.365757 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 00:20:58.365869 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 00:20:58.367421 ignition[1179]: PUT result: OK Jul 10 00:20:58.371326 ignition[1179]: disks: disks passed Jul 10 00:20:58.371397 ignition[1179]: Ignition finished successfully Jul 10 00:20:58.372879 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:20:58.373796 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:20:58.374414 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 00:20:58.374856 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:20:58.375378 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:20:58.375928 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:20:58.377541 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 00:20:58.425848 systemd-fsck[1187]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 10 00:20:58.428983 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:20:58.430312 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 00:20:58.567749 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none. Jul 10 00:20:58.568660 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:20:58.569536 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:20:58.571570 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:20:58.573803 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jul 10 00:20:58.574909 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 00:20:58.575520 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:20:58.575547 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:20:58.587066 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:20:58.589039 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 00:20:58.606754 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:12) scanned by mount (1206) Jul 10 00:20:58.609883 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:20:58.609943 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:20:58.612278 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:20:58.618902 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:20:58.972615 initrd-setup-root[1230]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:20:59.000601 initrd-setup-root[1237]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:20:59.004832 initrd-setup-root[1244]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:20:59.008929 initrd-setup-root[1251]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:20:59.328862 systemd-networkd[1157]: eth0: Gained IPv6LL Jul 10 00:20:59.364382 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:20:59.366050 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 00:20:59.367569 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 00:20:59.379556 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 10 00:20:59.381786 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:20:59.415847 ignition[1318]: INFO : Ignition 2.21.0 Jul 10 00:20:59.415847 ignition[1318]: INFO : Stage: mount Jul 10 00:20:59.417614 ignition[1318]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:20:59.417614 ignition[1318]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 00:20:59.417614 ignition[1318]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 00:20:59.422229 ignition[1318]: INFO : PUT result: OK Jul 10 00:20:59.419020 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 00:20:59.423486 ignition[1318]: INFO : mount: mount passed Jul 10 00:20:59.424046 ignition[1318]: INFO : Ignition finished successfully Jul 10 00:20:59.425155 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 00:20:59.427205 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 00:20:59.570313 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:20:59.605759 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:12) scanned by mount (1331) Jul 10 00:20:59.609909 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:20:59.609971 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:20:59.609985 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:20:59.617651 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 00:20:59.648644 ignition[1347]: INFO : Ignition 2.21.0 Jul 10 00:20:59.648644 ignition[1347]: INFO : Stage: files Jul 10 00:20:59.650220 ignition[1347]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:20:59.650220 ignition[1347]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 10 00:20:59.650220 ignition[1347]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 10 00:20:59.651882 ignition[1347]: INFO : PUT result: OK Jul 10 00:20:59.655067 ignition[1347]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:20:59.663424 ignition[1347]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:20:59.663424 ignition[1347]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:20:59.679303 ignition[1347]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:20:59.680193 ignition[1347]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:20:59.680193 ignition[1347]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:20:59.679900 unknown[1347]: wrote ssh authorized keys file for user: core Jul 10 00:20:59.697033 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 10 00:20:59.698089 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 10 00:20:59.772077 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 00:21:00.147430 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 10 00:21:00.148509 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:21:00.148509 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 10 00:21:00.664526 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 00:21:01.108066 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:21:01.108066 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:21:01.109887 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:21:01.109887 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:21:01.109887 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:21:01.109887 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:21:01.109887 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:21:01.109887 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:21:01.109887 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:21:01.114822 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:21:01.114822 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:21:01.114822 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 10 00:21:01.117355 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 10 00:21:01.117355 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 10 00:21:01.117355 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 10 00:21:01.647860 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 10 00:21:04.032711 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 10 00:21:04.032711 ignition[1347]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 10 00:21:04.034698 ignition[1347]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:21:04.037760 ignition[1347]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:21:04.037760 ignition[1347]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 10 00:21:04.037760 ignition[1347]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:21:04.040035 ignition[1347]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:21:04.040035 ignition[1347]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:21:04.040035 ignition[1347]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:21:04.040035 ignition[1347]: INFO : files: files passed
Jul 10 00:21:04.040035 ignition[1347]: INFO : Ignition finished successfully
Jul 10 00:21:04.039544 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:21:04.041878 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:21:04.044945 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:21:04.055753 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:21:04.056495 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:21:04.061699 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:21:04.061699 initrd-setup-root-after-ignition[1377]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:21:04.063986 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:21:04.065592 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:21:04.066222 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:21:04.068050 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:21:04.108250 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:21:04.108398 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:21:04.109600 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:21:04.110767 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:21:04.111524 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:21:04.112685 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:21:04.134473 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:21:04.136585 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:21:04.155299 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:21:04.156099 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:21:04.157191 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:21:04.158048 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:21:04.158279 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:21:04.159441 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:21:04.160303 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:21:04.161092 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:21:04.161871 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:21:04.162685 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:21:04.163390 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:21:04.164201 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:21:04.164962 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:21:04.165800 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:21:04.167026 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:21:04.167869 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:21:04.168594 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:21:04.168852 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:21:04.169875 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:21:04.170796 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:21:04.171387 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 00:21:04.171523 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:21:04.172225 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:21:04.172445 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:21:04.173752 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:21:04.173992 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:21:04.174804 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:21:04.174960 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 00:21:04.176606 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 00:21:04.180372 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 00:21:04.182519 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:21:04.182742 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:21:04.183989 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:21:04.184160 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:21:04.190540 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:21:04.191633 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 00:21:04.212969 ignition[1401]: INFO : Ignition 2.21.0
Jul 10 00:21:04.212969 ignition[1401]: INFO : Stage: umount
Jul 10 00:21:04.215865 ignition[1401]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:21:04.215865 ignition[1401]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 10 00:21:04.215865 ignition[1401]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 10 00:21:04.217411 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:21:04.219199 ignition[1401]: INFO : PUT result: OK
Jul 10 00:21:04.223383 ignition[1401]: INFO : umount: umount passed
Jul 10 00:21:04.223383 ignition[1401]: INFO : Ignition finished successfully
Jul 10 00:21:04.224181 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:21:04.224328 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 00:21:04.225596 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:21:04.225761 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 00:21:04.226779 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:21:04.226892 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 00:21:04.227603 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:21:04.227664 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 00:21:04.228312 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 10 00:21:04.228371 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 10 00:21:04.228990 systemd[1]: Stopped target network.target - Network.
Jul 10 00:21:04.229551 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:21:04.229614 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:21:04.230216 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 00:21:04.230880 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:21:04.232797 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:21:04.233342 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 00:21:04.233939 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 00:21:04.234952 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:21:04.235009 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:21:04.235576 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:21:04.235629 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:21:04.236221 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:21:04.236292 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 00:21:04.236887 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 00:21:04.236941 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 00:21:04.237511 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:21:04.237571 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 00:21:04.238328 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 00:21:04.239057 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 00:21:04.243822 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:21:04.243971 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 00:21:04.247655 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 00:21:04.248147 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:21:04.248279 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 00:21:04.250183 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 00:21:04.250968 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 10 00:21:04.251614 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:21:04.251675 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:21:04.253400 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 00:21:04.253931 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:21:04.253998 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:21:04.254643 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:21:04.254701 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:21:04.257872 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:21:04.257932 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:21:04.258897 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:21:04.258955 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:21:04.259865 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:21:04.261896 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 00:21:04.261983 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:21:04.268110 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:21:04.268309 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:21:04.269782 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:21:04.269852 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:21:04.271073 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:21:04.271128 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:21:04.272626 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:21:04.272691 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:21:04.274083 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:21:04.274144 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:21:04.275195 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:21:04.275262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:21:04.281537 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 00:21:04.282036 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 00:21:04.282110 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:21:04.286147 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:21:04.286227 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:21:04.286831 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 10 00:21:04.286895 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:21:04.288669 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:21:04.288763 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:21:04.289347 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:21:04.289404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:21:04.292419 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 10 00:21:04.292494 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 10 00:21:04.292543 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 10 00:21:04.292593 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:21:04.295169 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:21:04.295293 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:21:04.302269 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:21:04.302420 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 00:21:04.303577 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 00:21:04.305443 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 00:21:04.330887 systemd[1]: Switching root.
Jul 10 00:21:04.385900 systemd-journald[207]: Journal stopped
Jul 10 00:21:06.305384 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:21:06.305448 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:21:06.305464 kernel: SELinux: policy capability open_perms=1
Jul 10 00:21:06.305480 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:21:06.305501 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:21:06.305513 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:21:06.305525 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:21:06.305536 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:21:06.305550 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:21:06.305562 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 00:21:06.305573 kernel: audit: type=1403 audit(1752106864.977:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 00:21:06.305586 systemd[1]: Successfully loaded SELinux policy in 84.573ms.
Jul 10 00:21:06.305607 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.001ms.
Jul 10 00:21:06.305625 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:21:06.305637 systemd[1]: Detected virtualization amazon.
Jul 10 00:21:06.305651 systemd[1]: Detected architecture x86-64.
Jul 10 00:21:06.305663 systemd[1]: Detected first boot.
Jul 10 00:21:06.305675 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:21:06.305687 zram_generator::config[1445]: No configuration found.
Jul 10 00:21:06.305701 kernel: Guest personality initialized and is inactive
Jul 10 00:21:06.305714 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 10 00:21:06.306751 kernel: Initialized host personality
Jul 10 00:21:06.306768 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 00:21:06.306781 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:21:06.306795 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 00:21:06.306809 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 00:21:06.306822 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 00:21:06.306834 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:21:06.306847 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:21:06.306860 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:21:06.306875 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:21:06.306887 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:21:06.306900 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:21:06.306913 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:21:06.306925 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:21:06.306937 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:21:06.306949 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:21:06.306963 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:21:06.306975 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:21:06.306990 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:21:06.307008 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:21:06.307021 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:21:06.307034 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 10 00:21:06.307046 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:21:06.307059 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:21:06.307071 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 00:21:06.307086 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 00:21:06.307099 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:21:06.307112 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:21:06.307124 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:21:06.307136 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:21:06.307149 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:21:06.307161 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:21:06.307173 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:21:06.307186 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:21:06.307201 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 00:21:06.307213 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:21:06.307226 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:21:06.307238 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:21:06.307251 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:21:06.307264 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:21:06.307276 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:21:06.307288 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:21:06.307301 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:21:06.307315 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:21:06.307328 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:21:06.307341 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:21:06.307354 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:21:06.307367 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:21:06.307379 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:21:06.307391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:21:06.307404 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:21:06.307416 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:21:06.307431 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:21:06.307444 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:21:06.307456 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:21:06.307469 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:21:06.307481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:21:06.307494 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:21:06.307507 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 00:21:06.307519 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 00:21:06.307534 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 00:21:06.307548 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 00:21:06.307561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:21:06.307573 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:21:06.307586 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:21:06.307598 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:21:06.307611 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:21:06.307623 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 00:21:06.307636 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:21:06.307651 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 00:21:06.307666 systemd[1]: Stopped verity-setup.service.
Jul 10 00:21:06.307679 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:21:06.307692 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:21:06.307705 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:21:06.307719 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:21:06.309760 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:21:06.309778 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:21:06.309792 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:21:06.309804 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:21:06.309824 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:21:06.309837 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:21:06.309851 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:21:06.309863 kernel: fuse: init (API version 7.41)
Jul 10 00:21:06.309877 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:21:06.309889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:21:06.309903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:21:06.309915 kernel: loop: module loaded
Jul 10 00:21:06.309928 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:21:06.309943 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:21:06.309956 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:21:06.309969 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:21:06.309981 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:21:06.309994 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:21:06.310006 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:21:06.310054 systemd-journald[1521]: Collecting audit messages is disabled.
Jul 10 00:21:06.310083 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:21:06.310096 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:21:06.310109 systemd-journald[1521]: Journal started
Jul 10 00:21:06.310133 systemd-journald[1521]: Runtime Journal (/run/log/journal/ec24924339e0f2f67c0c1a7a2a3c4852) is 4.8M, max 38.4M, 33.6M free.
Jul 10 00:21:06.000344 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:21:06.021014 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 10 00:21:06.021440 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 00:21:06.324180 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:21:06.324252 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:21:06.324271 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:21:06.327802 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 00:21:06.337831 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:21:06.337899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:21:06.355755 kernel: ACPI: bus type drm_connector registered
Jul 10 00:21:06.361768 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 00:21:06.361834 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:21:06.371197 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 00:21:06.374518 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:21:06.379160 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:21:06.385744 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 00:21:06.392763 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:21:06.394783 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:21:06.400885 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:21:06.403059 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:21:06.403235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:21:06.403959 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 00:21:06.405136 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 00:21:06.405988 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 00:21:06.421835 kernel: loop0: detected capacity change from 0 to 146240
Jul 10 00:21:06.434492 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:21:06.435407 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 00:21:06.439993 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 00:21:06.442549 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 00:21:06.444902 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 10 00:21:06.458270 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:21:06.469655 systemd-tmpfiles[1561]: ACLs are not supported, ignoring.
Jul 10 00:21:06.471056 systemd-tmpfiles[1561]: ACLs are not supported, ignoring.
Jul 10 00:21:06.471596 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 10 00:21:06.477315 systemd-journald[1521]: Time spent on flushing to /var/log/journal/ec24924339e0f2f67c0c1a7a2a3c4852 is 14.828ms for 1027 entries.
Jul 10 00:21:06.477315 systemd-journald[1521]: System Journal (/var/log/journal/ec24924339e0f2f67c0c1a7a2a3c4852) is 8M, max 195.6M, 187.6M free.
Jul 10 00:21:06.509636 systemd-journald[1521]: Received client request to flush runtime journal.
Jul 10 00:21:06.481804 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:21:06.486863 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 00:21:06.511852 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 00:21:06.553920 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 00:21:06.556866 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:21:06.568977 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 00:21:06.586219 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Jul 10 00:21:06.586239 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Jul 10 00:21:06.593079 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:21:06.600757 kernel: loop1: detected capacity change from 0 to 113872
Jul 10 00:21:06.716763 kernel: loop2: detected capacity change from 0 to 224512
Jul 10 00:21:07.010849 kernel: loop3: detected capacity change from 0 to 72352
Jul 10 00:21:07.023485 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 00:21:07.047755 kernel: loop4: detected capacity change from 0 to 146240 Jul 10 00:21:07.083753 kernel: loop5: detected capacity change from 0 to 113872 Jul 10 00:21:07.112764 kernel: loop6: detected capacity change from 0 to 224512 Jul 10 00:21:07.155755 kernel: loop7: detected capacity change from 0 to 72352 Jul 10 00:21:07.169152 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 00:21:07.171547 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:21:07.179451 (sd-merge)[1605]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 10 00:21:07.180018 (sd-merge)[1605]: Merged extensions into '/usr'. Jul 10 00:21:07.184425 systemd[1]: Reload requested from client PID 1560 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 00:21:07.184545 systemd[1]: Reloading... Jul 10 00:21:07.209646 systemd-udevd[1607]: Using default interface naming scheme 'v255'. Jul 10 00:21:07.260756 zram_generator::config[1633]: No configuration found. Jul 10 00:21:07.421654 (udev-worker)[1664]: Network interface NamePolicy= disabled on kernel command line. Jul 10 00:21:07.480609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:21:07.609239 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 10 00:21:07.609372 systemd[1]: Reloading finished in 424 ms. 
Jul 10 00:21:07.612759 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 10 00:21:07.619192 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:21:07.625782 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 10 00:21:07.626044 kernel: ACPI: button: Power Button [PWRF] Jul 10 00:21:07.626062 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jul 10 00:21:07.625703 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:21:07.631329 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 00:21:07.631778 kernel: ACPI: button: Sleep Button [SLPF] Jul 10 00:21:07.653953 systemd[1]: Starting ensure-sysext.service... Jul 10 00:21:07.659983 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:21:07.664062 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:21:07.706871 systemd[1]: Reload requested from client PID 1732 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:21:07.706886 systemd[1]: Reloading... Jul 10 00:21:07.742311 systemd-tmpfiles[1735]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 00:21:07.742843 systemd-tmpfiles[1735]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 00:21:07.743102 systemd-tmpfiles[1735]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:21:07.743911 systemd-tmpfiles[1735]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 00:21:07.747546 systemd-tmpfiles[1735]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:21:07.748500 systemd-tmpfiles[1735]: ACLs are not supported, ignoring. 
Jul 10 00:21:07.748572 systemd-tmpfiles[1735]: ACLs are not supported, ignoring. Jul 10 00:21:07.763351 systemd-tmpfiles[1735]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:21:07.763364 systemd-tmpfiles[1735]: Skipping /boot Jul 10 00:21:07.785315 systemd-tmpfiles[1735]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:21:07.786946 systemd-tmpfiles[1735]: Skipping /boot Jul 10 00:21:07.804800 zram_generator::config[1779]: No configuration found. Jul 10 00:21:08.057241 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:21:08.107637 ldconfig[1549]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:21:08.183435 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 10 00:21:08.184367 systemd[1]: Reloading finished in 476 ms. Jul 10 00:21:08.196947 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:21:08.210183 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:21:08.262439 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:08.263977 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:21:08.268989 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 00:21:08.269909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:21:08.275842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:21:08.279283 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 10 00:21:08.283074 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:21:08.286054 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:21:08.286842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:21:08.289852 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:21:08.290815 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:21:08.293991 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:21:08.300090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:21:08.301293 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:21:08.306825 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 00:21:08.315853 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 00:21:08.319978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:21:08.320570 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:08.327844 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:21:08.328121 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:21:08.339848 systemd[1]: Finished ensure-sysext.service. Jul 10 00:21:08.340876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:21:08.342059 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 10 00:21:08.343222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:21:08.351118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:21:08.356820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:21:08.358686 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:21:08.364613 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:21:08.366926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:21:08.367824 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:21:08.382220 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 00:21:08.431489 augenrules[1940]: No rules Jul 10 00:21:08.434055 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 00:21:08.435103 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:21:08.435562 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:21:08.440479 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 00:21:08.458153 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 00:21:08.472348 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:21:08.506467 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 00:21:08.507632 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 10 00:21:08.549440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:08.582440 systemd-networkd[1734]: lo: Link UP Jul 10 00:21:08.582453 systemd-networkd[1734]: lo: Gained carrier Jul 10 00:21:08.582883 systemd-resolved[1907]: Positive Trust Anchors: Jul 10 00:21:08.582894 systemd-resolved[1907]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:21:08.582932 systemd-resolved[1907]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:21:08.583983 systemd-networkd[1734]: Enumeration completed Jul 10 00:21:08.584086 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:21:08.585032 systemd-networkd[1734]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:21:08.586753 systemd-networkd[1734]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:21:08.587897 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 00:21:08.588585 systemd-networkd[1734]: eth0: Link UP Jul 10 00:21:08.588779 systemd-networkd[1734]: eth0: Gained carrier Jul 10 00:21:08.588796 systemd-networkd[1734]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:21:08.589395 systemd-resolved[1907]: Defaulting to hostname 'linux'. 
Jul 10 00:21:08.590906 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 00:21:08.593219 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:21:08.593674 systemd[1]: Reached target network.target - Network. Jul 10 00:21:08.594112 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:21:08.594811 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:21:08.595230 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:21:08.595574 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:21:08.595941 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 10 00:21:08.596393 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:21:08.596820 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:21:08.597132 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:21:08.597427 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:21:08.597459 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:21:08.597765 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:21:08.598820 systemd-networkd[1734]: eth0: DHCPv4 address 172.31.26.174/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 10 00:21:08.599863 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:21:08.601769 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:21:08.604268 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Jul 10 00:21:08.604837 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 00:21:08.605253 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 00:21:08.607473 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:21:08.608297 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 00:21:08.609427 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:21:08.610861 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:21:08.611201 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:21:08.611550 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:21:08.611571 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:21:08.612474 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:21:08.616011 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 10 00:21:08.617956 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:21:08.622852 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:21:08.625927 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:21:08.627385 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:21:08.627768 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:21:08.632651 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 10 00:21:08.640039 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:21:08.644207 systemd[1]: Started ntpd.service - Network Time Service. 
Jul 10 00:21:08.646925 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:21:08.653553 jq[1969]: false Jul 10 00:21:08.653182 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 10 00:21:08.657912 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:21:08.669189 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:21:08.684755 google_oslogin_nss_cache[1971]: oslogin_cache_refresh[1971]: Refreshing passwd entry cache Jul 10 00:21:08.681961 oslogin_cache_refresh[1971]: Refreshing passwd entry cache Jul 10 00:21:08.689938 extend-filesystems[1970]: Found /dev/nvme0n1p6 Jul 10 00:21:08.695102 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:21:08.697163 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:21:08.697662 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:21:08.699442 oslogin_cache_refresh[1971]: Failure getting users, quitting Jul 10 00:21:08.700281 google_oslogin_nss_cache[1971]: oslogin_cache_refresh[1971]: Failure getting users, quitting Jul 10 00:21:08.700281 google_oslogin_nss_cache[1971]: oslogin_cache_refresh[1971]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:21:08.700281 google_oslogin_nss_cache[1971]: oslogin_cache_refresh[1971]: Refreshing group entry cache Jul 10 00:21:08.699722 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:21:08.699461 oslogin_cache_refresh[1971]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jul 10 00:21:08.699497 oslogin_cache_refresh[1971]: Refreshing group entry cache Jul 10 00:21:08.703248 google_oslogin_nss_cache[1971]: oslogin_cache_refresh[1971]: Failure getting groups, quitting Jul 10 00:21:08.703248 google_oslogin_nss_cache[1971]: oslogin_cache_refresh[1971]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:21:08.701873 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:21:08.701539 oslogin_cache_refresh[1971]: Failure getting groups, quitting Jul 10 00:21:08.701550 oslogin_cache_refresh[1971]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:21:08.704869 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 00:21:08.707544 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:21:08.708661 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:21:08.709368 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:21:08.709626 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 10 00:21:08.710054 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 10 00:21:08.713489 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:21:08.714785 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:21:08.724630 extend-filesystems[1970]: Found /dev/nvme0n1p9 Jul 10 00:21:08.722240 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:21:08.724964 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 10 00:21:08.733707 extend-filesystems[1970]: Checking size of /dev/nvme0n1p9 Jul 10 00:21:08.750761 coreos-metadata[1966]: Jul 10 00:21:08.749 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 10 00:21:08.753151 coreos-metadata[1966]: Jul 10 00:21:08.753 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 10 00:21:08.754120 coreos-metadata[1966]: Jul 10 00:21:08.753 INFO Fetch successful Jul 10 00:21:08.754120 coreos-metadata[1966]: Jul 10 00:21:08.754 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 10 00:21:08.755144 coreos-metadata[1966]: Jul 10 00:21:08.755 INFO Fetch successful Jul 10 00:21:08.755326 coreos-metadata[1966]: Jul 10 00:21:08.755 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 10 00:21:08.757531 coreos-metadata[1966]: Jul 10 00:21:08.756 INFO Fetch successful Jul 10 00:21:08.757531 coreos-metadata[1966]: Jul 10 00:21:08.757 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 10 00:21:08.765473 coreos-metadata[1966]: Jul 10 00:21:08.764 INFO Fetch successful Jul 10 00:21:08.765473 coreos-metadata[1966]: Jul 10 00:21:08.764 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 10 00:21:08.766377 coreos-metadata[1966]: Jul 10 00:21:08.766 INFO Fetch failed with 404: resource not found Jul 10 00:21:08.766377 coreos-metadata[1966]: Jul 10 00:21:08.766 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 10 00:21:08.768972 coreos-metadata[1966]: Jul 10 00:21:08.766 INFO Fetch successful Jul 10 00:21:08.768972 coreos-metadata[1966]: Jul 10 00:21:08.768 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 10 00:21:08.769089 extend-filesystems[1970]: Resized partition /dev/nvme0n1p9 Jul 10 00:21:08.769484 jq[1992]: true
Jul 10 00:21:08.772365 coreos-metadata[1966]: Jul 10 00:21:08.771 INFO Fetch successful Jul 10 00:21:08.772365 coreos-metadata[1966]: Jul 10 00:21:08.771 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 10 00:21:08.776136 coreos-metadata[1966]: Jul 10 00:21:08.774 INFO Fetch successful Jul 10 00:21:08.776136 coreos-metadata[1966]: Jul 10 00:21:08.774 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 10 00:21:08.776483 coreos-metadata[1966]: Jul 10 00:21:08.776 INFO Fetch successful Jul 10 00:21:08.776483 coreos-metadata[1966]: Jul 10 00:21:08.776 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 10 00:21:08.778025 coreos-metadata[1966]: Jul 10 00:21:08.777 INFO Fetch successful Jul 10 00:21:08.793094 (ntainerd)[2015]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:21:08.794660 ntpd[1973]: ntpd 4.2.8p17@1.4004-o Wed Jul 9 21:35:50 UTC 2025 (1): Starting Jul 10 00:21:08.796930 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: ntpd 4.2.8p17@1.4004-o Wed Jul 9 21:35:50 UTC 2025 (1): Starting Jul 10 00:21:08.796930 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 10 00:21:08.796930 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: ---------------------------------------------------- Jul 10 00:21:08.796930 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Jul 10 00:21:08.796930 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 10 00:21:08.796930 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: corporation. Support and training for ntp-4 are
Jul 10 00:21:08.796930 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: available at https://www.nwtime.org/support Jul 10 00:21:08.796930 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: ---------------------------------------------------- Jul 10 00:21:08.794681 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 10 00:21:08.794688 ntpd[1973]: ---------------------------------------------------- Jul 10 00:21:08.794694 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Jul 10 00:21:08.794700 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 10 00:21:08.794706 ntpd[1973]: corporation. Support and training for ntp-4 are Jul 10 00:21:08.794712 ntpd[1973]: available at https://www.nwtime.org/support Jul 10 00:21:08.794718 ntpd[1973]: ---------------------------------------------------- Jul 10 00:21:08.799625 dbus-daemon[1967]: [system] SELinux support is enabled Jul 10 00:21:08.801189 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: proto: precision = 0.057 usec (-24) Jul 10 00:21:08.800820 ntpd[1973]: proto: precision = 0.057 usec (-24) Jul 10 00:21:08.801586 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:21:08.811295 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: basedate set to 2025-06-27 Jul 10 00:21:08.811295 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: gps base set to 2025-06-29 (week 2373) Jul 10 00:21:08.806594 ntpd[1973]: basedate set to 2025-06-27 Jul 10 00:21:08.809237 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:21:08.806610 ntpd[1973]: gps base set to 2025-06-29 (week 2373) Jul 10 00:21:08.809275 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 10 00:21:08.809674 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:21:08.809692 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:21:08.815065 dbus-daemon[1967]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1734 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 10 00:21:08.818115 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Jul 10 00:21:08.819515 tar[1995]: linux-amd64/LICENSE Jul 10 00:21:08.819515 tar[1995]: linux-amd64/helm Jul 10 00:21:08.824772 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Jul 10 00:21:08.824772 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 10 00:21:08.824772 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Jul 10 00:21:08.824772 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: Listen normally on 3 eth0 172.31.26.174:123 Jul 10 00:21:08.824772 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: Listen normally on 4 lo [::1]:123 Jul 10 00:21:08.824772 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: bind(21) AF_INET6 fe80::4d0:49ff:fe4c:e5e5%2#123 flags 0x11 failed: Cannot assign requested address Jul 10 00:21:08.824772 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: unable to create socket on eth0 (5) for fe80::4d0:49ff:fe4c:e5e5%2#123 Jul 10 00:21:08.824772 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: failed to init interface for address fe80::4d0:49ff:fe4c:e5e5%2 Jul 10 00:21:08.824772 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: Listening on routing socket on fd #21 for interface updates Jul 10 00:21:08.822257 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 10 00:21:08.822398 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123
Jul 10 00:21:08.822423 ntpd[1973]: Listen normally on 3 eth0 172.31.26.174:123 Jul 10 00:21:08.822452 ntpd[1973]: Listen normally on 4 lo [::1]:123 Jul 10 00:21:08.822489 ntpd[1973]: bind(21) AF_INET6 fe80::4d0:49ff:fe4c:e5e5%2#123 flags 0x11 failed: Cannot assign requested address Jul 10 00:21:08.822504 ntpd[1973]: unable to create socket on eth0 (5) for fe80::4d0:49ff:fe4c:e5e5%2#123 Jul 10 00:21:08.822514 ntpd[1973]: failed to init interface for address fe80::4d0:49ff:fe4c:e5e5%2 Jul 10 00:21:08.822534 ntpd[1973]: Listening on routing socket on fd #21 for interface updates Jul 10 00:21:08.836240 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 10 00:21:08.836240 ntpd[1973]: 10 Jul 00:21:08 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 10 00:21:08.832441 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 10 00:21:08.836361 extend-filesystems[2025]: resize2fs 1.47.2 (1-Jan-2025) Jul 10 00:21:08.839412 update_engine[1991]: I20250710 00:21:08.829232 1991 main.cc:92] Flatcar Update Engine starting Jul 10 00:21:08.832470 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 10 00:21:08.833966 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 10 00:21:08.842806 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 10 00:21:08.840608 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 10 00:21:08.842945 jq[2019]: true Jul 10 00:21:08.842893 systemd-logind[1989]: Watching system buttons on /dev/input/event2 (Power Button) Jul 10 00:21:08.842909 systemd-logind[1989]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 10 00:21:08.842926 systemd-logind[1989]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 00:21:08.855418 update_engine[1991]: I20250710 00:21:08.846834 1991 update_check_scheduler.cc:74] Next update check in 7m41s Jul 10 00:21:08.847118 systemd-logind[1989]: New seat seat0. Jul 10 00:21:08.855274 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:21:08.876864 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:21:08.877657 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:21:08.879255 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 10 00:21:08.890784 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 00:21:08.892084 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:21:08.943156 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 10 00:21:08.970628 extend-filesystems[2025]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 10 00:21:08.970628 extend-filesystems[2025]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:21:08.970628 extend-filesystems[2025]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 10 00:21:08.979397 extend-filesystems[1970]: Resized filesystem in /dev/nvme0n1p9 Jul 10 00:21:08.974951 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:21:08.975158 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 10 00:21:08.988754 bash[2060]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:21:08.993970 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:21:09.000106 systemd[1]: Starting sshkeys.service... Jul 10 00:21:09.028517 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 10 00:21:09.031356 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 10 00:21:09.094337 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 10 00:21:09.100177 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 10 00:21:09.100751 dbus-daemon[1967]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2029 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 10 00:21:09.110814 systemd[1]: Starting polkit.service - Authorization Manager... Jul 10 00:21:09.209305 polkitd[2124]: Started polkitd version 126 Jul 10 00:21:09.213066 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jul 10 00:21:09.220861 coreos-metadata[2095]: Jul 10 00:21:09.220 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 10 00:21:09.221843 coreos-metadata[2095]: Jul 10 00:21:09.221 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 10 00:21:09.226180 coreos-metadata[2095]: Jul 10 00:21:09.222 INFO Fetch successful Jul 10 00:21:09.226180 coreos-metadata[2095]: Jul 10 00:21:09.222 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 10 00:21:09.226180 coreos-metadata[2095]: Jul 10 00:21:09.226 INFO Fetch successful Jul 10 00:21:09.228641 unknown[2095]: wrote ssh authorized keys file for user: core Jul 10 00:21:09.269071 polkitd[2124]: Loading rules from directory /etc/polkit-1/rules.d Jul 10 00:21:09.272305 polkitd[2124]: Loading rules from directory /run/polkit-1/rules.d Jul 10 00:21:09.272518 polkitd[2124]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 10 00:21:09.273668 polkitd[2124]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 10 00:21:09.273694 polkitd[2124]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 10 00:21:09.277225 polkitd[2124]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 10 00:21:09.285773 polkitd[2124]: Finished loading, compiling and executing 2 rules Jul 10 00:21:09.287287 systemd[1]: Started polkit.service - Authorization Manager. 
Jul 10 00:21:09.298798 containerd[2015]: time="2025-07-10T00:21:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 00:21:09.299865 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 10 00:21:09.300836 containerd[2015]: time="2025-07-10T00:21:09.300445920Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 00:21:09.307741 polkitd[2124]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 10 00:21:09.314742 update-ssh-keys[2156]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:21:09.315492 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 10 00:21:09.321212 systemd[1]: Finished sshkeys.service. Jul 10 00:21:09.327818 containerd[2015]: time="2025-07-10T00:21:09.325853335Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.522µs" Jul 10 00:21:09.327818 containerd[2015]: time="2025-07-10T00:21:09.325891287Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 00:21:09.327818 containerd[2015]: time="2025-07-10T00:21:09.325914249Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 00:21:09.327818 containerd[2015]: time="2025-07-10T00:21:09.326929545Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 00:21:09.327818 containerd[2015]: time="2025-07-10T00:21:09.326960479Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 00:21:09.327818 containerd[2015]: time="2025-07-10T00:21:09.326990933Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:21:09.327818 containerd[2015]: time="2025-07-10T00:21:09.327039955Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:21:09.327818 containerd[2015]: time="2025-07-10T00:21:09.327054037Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:21:09.327818 containerd[2015]: time="2025-07-10T00:21:09.327273815Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:21:09.327818 containerd[2015]: time="2025-07-10T00:21:09.327291020Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:21:09.327818 containerd[2015]: time="2025-07-10T00:21:09.327306450Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:21:09.327818 containerd[2015]: time="2025-07-10T00:21:09.327315996Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 00:21:09.328106 containerd[2015]: time="2025-07-10T00:21:09.327385544Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 00:21:09.328106 containerd[2015]: time="2025-07-10T00:21:09.327550254Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:21:09.328106 containerd[2015]: time="2025-07-10T00:21:09.327579982Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: 
no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:21:09.328106 containerd[2015]: time="2025-07-10T00:21:09.327593711Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 00:21:09.328106 containerd[2015]: time="2025-07-10T00:21:09.327616192Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 00:21:09.329539 containerd[2015]: time="2025-07-10T00:21:09.328079738Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 00:21:09.329539 containerd[2015]: time="2025-07-10T00:21:09.328364756Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333522419Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333583079Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333614489Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333657386Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333670894Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333681618Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333693704Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333705662Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333717263Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333744168Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333753912Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333765219Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333876014Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 00:21:09.336892 containerd[2015]: time="2025-07-10T00:21:09.333895284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 00:21:09.337232 containerd[2015]: time="2025-07-10T00:21:09.333908199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 00:21:09.337232 containerd[2015]: time="2025-07-10T00:21:09.333918618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 00:21:09.337232 containerd[2015]: time="2025-07-10T00:21:09.333930293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 00:21:09.337232 containerd[2015]: time="2025-07-10T00:21:09.333940507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Jul 10 00:21:09.337232 containerd[2015]: time="2025-07-10T00:21:09.333959464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 00:21:09.337232 containerd[2015]: time="2025-07-10T00:21:09.333982135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 00:21:09.337232 containerd[2015]: time="2025-07-10T00:21:09.333996454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 00:21:09.337232 containerd[2015]: time="2025-07-10T00:21:09.334006861Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 00:21:09.337232 containerd[2015]: time="2025-07-10T00:21:09.334016330Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 00:21:09.337232 containerd[2015]: time="2025-07-10T00:21:09.334077926Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 00:21:09.337232 containerd[2015]: time="2025-07-10T00:21:09.334092145Z" level=info msg="Start snapshots syncer" Jul 10 00:21:09.337232 containerd[2015]: time="2025-07-10T00:21:09.334122981Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:21:09.337470 containerd[2015]: time="2025-07-10T00:21:09.334381758Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:21:09.337470 containerd[2015]: time="2025-07-10T00:21:09.334425111Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:21:09.337596 containerd[2015]: time="2025-07-10T00:21:09.337429022Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:21:09.337596 containerd[2015]: time="2025-07-10T00:21:09.337585058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:21:09.337677 containerd[2015]: time="2025-07-10T00:21:09.337608881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:21:09.337677 containerd[2015]: time="2025-07-10T00:21:09.337620148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:21:09.337677 containerd[2015]: time="2025-07-10T00:21:09.337631977Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:21:09.337677 containerd[2015]: time="2025-07-10T00:21:09.337651678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:21:09.337677 containerd[2015]: time="2025-07-10T00:21:09.337662336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:21:09.337677 containerd[2015]: time="2025-07-10T00:21:09.337673605Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.337703384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.337713411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.337736787Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.338002482Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.338024997Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.338033072Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.338245174Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.338255359Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.338265300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.338278415Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.338295638Z" level=info msg="runtime interface created" Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.338300777Z" level=info msg="created NRI interface" Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.338308344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:21:09.339189 containerd[2015]: time="2025-07-10T00:21:09.338322057Z" level=info msg="Connect containerd service" Jul 10 00:21:09.343871 containerd[2015]: time="2025-07-10T00:21:09.343163792Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:21:09.344443 
containerd[2015]: time="2025-07-10T00:21:09.344414560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:21:09.350947 locksmithd[2040]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:21:09.380928 systemd-resolved[1907]: System hostname changed to 'ip-172-31-26-174'. Jul 10 00:21:09.381001 systemd-hostnamed[2029]: Hostname set to (transient) Jul 10 00:21:09.679837 containerd[2015]: time="2025-07-10T00:21:09.679472993Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:21:09.679837 containerd[2015]: time="2025-07-10T00:21:09.679532216Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:21:09.679837 containerd[2015]: time="2025-07-10T00:21:09.679559887Z" level=info msg="Start subscribing containerd event" Jul 10 00:21:09.679837 containerd[2015]: time="2025-07-10T00:21:09.679583672Z" level=info msg="Start recovering state" Jul 10 00:21:09.679837 containerd[2015]: time="2025-07-10T00:21:09.679664677Z" level=info msg="Start event monitor" Jul 10 00:21:09.679837 containerd[2015]: time="2025-07-10T00:21:09.679673875Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:21:09.679837 containerd[2015]: time="2025-07-10T00:21:09.679682140Z" level=info msg="Start streaming server" Jul 10 00:21:09.679837 containerd[2015]: time="2025-07-10T00:21:09.679692245Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:21:09.679837 containerd[2015]: time="2025-07-10T00:21:09.679698901Z" level=info msg="runtime interface starting up..." Jul 10 00:21:09.679837 containerd[2015]: time="2025-07-10T00:21:09.679704887Z" level=info msg="starting plugins..." 
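The `failed to load cni during init` error above means `/etc/cni/net.d` contained no network config at this point in boot. A minimal conflist of the kind containerd looks for there could be sketched as follows; the bridge/host-local plugin choice and subnet are illustrative assumptions, not taken from this host:

```python
import json

# Minimal CNI network list for /etc/cni/net.d/10-containerd-net.conflist
# (filename and plugin parameters are assumptions for illustration).
conflist = {
    "cniVersion": "1.0.0",
    "name": "containerd-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

rendered = json.dumps(conflist, indent=2)
```

On a kubeadm-style node this file is typically installed later by the cluster's network add-on, which is why the error is logged as informational guidance rather than fatal.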
Jul 10 00:21:09.679837 containerd[2015]: time="2025-07-10T00:21:09.679716396Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:21:09.683151 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:21:09.685608 containerd[2015]: time="2025-07-10T00:21:09.683027485Z" level=info msg="containerd successfully booted in 0.384695s" Jul 10 00:21:09.695868 systemd-networkd[1734]: eth0: Gained IPv6LL Jul 10 00:21:09.701142 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:21:09.702214 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:21:09.707932 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 10 00:21:09.710883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:09.716984 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:21:09.765605 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:21:09.782445 sshd_keygen[2024]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:21:09.810643 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:21:09.813948 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:21:09.815688 systemd[1]: Started sshd@0-172.31.26.174:22-139.178.89.65:35180.service - OpenSSH per-connection server daemon (139.178.89.65:35180). Jul 10 00:21:09.818886 amazon-ssm-agent[2193]: Initializing new seelog logger Jul 10 00:21:09.819481 amazon-ssm-agent[2193]: New Seelog Logger Creation Complete Jul 10 00:21:09.819817 amazon-ssm-agent[2193]: 2025/07/10 00:21:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:21:09.820206 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
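Earlier in the containerd startup, the cri plugin logged its effective configuration as an escaped JSON blob (`config="{\"containerd\":...}"`). That blob can be recovered by unescaping the quotes and parsing; a trimmed excerpt is enough to show the shape (the full blob parses the same way):

```python
import json

# Excerpt of the logged config value, as it appears inside config="...".
logged = '{\\"containerd\\":{\\"defaultRuntimeName\\":\\"runc\\"},\\"enableSelinux\\":true}'

# Undo the journal's quote escaping, then parse as ordinary JSON.
unescaped = logged.replace('\\"', '"')
cfg = json.loads(unescaped)
```

In the full blob from this boot, `SystemdCgroup` is `true` and `enableSelinux` is `true`, consistent with a systemd-cgroup, SELinux-enabled Flatcar node.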
Jul 10 00:21:09.820593 amazon-ssm-agent[2193]: 2025/07/10 00:21:09 processing appconfig overrides Jul 10 00:21:09.821416 amazon-ssm-agent[2193]: 2025/07/10 00:21:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:21:09.821789 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:21:09.821904 amazon-ssm-agent[2193]: 2025/07/10 00:21:09 processing appconfig overrides Jul 10 00:21:09.822488 amazon-ssm-agent[2193]: 2025/07/10 00:21:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:21:09.822755 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:21:09.822887 amazon-ssm-agent[2193]: 2025/07/10 00:21:09 processing appconfig overrides Jul 10 00:21:09.823294 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.8213 INFO Proxy environment variables: Jul 10 00:21:09.827605 amazon-ssm-agent[2193]: 2025/07/10 00:21:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:21:09.827605 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:21:09.827605 amazon-ssm-agent[2193]: 2025/07/10 00:21:09 processing appconfig overrides Jul 10 00:21:09.844158 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:21:09.844364 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:21:09.848088 tar[1995]: linux-amd64/README.md Jul 10 00:21:09.850128 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:21:09.866353 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:21:09.876002 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:21:09.879998 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:21:09.882316 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 00:21:09.882887 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 10 00:21:09.924716 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.8213 INFO http_proxy: Jul 10 00:21:10.014891 amazon-ssm-agent[2193]: 2025/07/10 00:21:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:21:10.014891 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:21:10.014891 amazon-ssm-agent[2193]: 2025/07/10 00:21:10 processing appconfig overrides Jul 10 00:21:10.022787 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.8213 INFO no_proxy: Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.8213 INFO https_proxy: Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.8222 INFO Checking if agent identity type OnPrem can be assumed Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.8224 INFO Checking if agent identity type EC2 can be assumed Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.8980 INFO Agent will take identity from EC2 Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.8995 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.8995 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.8995 INFO [amazon-ssm-agent] Starting Core Agent Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.8995 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.8995 INFO [Registrar] Starting registrar module Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.9014 INFO [EC2Identity] Checking disk for registration info Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.9014 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.9014 INFO [EC2Identity] Generating registration keypair Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.9744 INFO [EC2Identity] Checking write access before registering Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:09.9748 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:10.0146 INFO [EC2Identity] EC2 registration was successful. Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:10.0146 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:10.0147 INFO [CredentialRefresher] credentialRefresher has started Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:10.0147 INFO [CredentialRefresher] Starting credentials refresher loop Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:10.0432 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 10 00:21:10.044370 amazon-ssm-agent[2193]: 2025-07-10 00:21:10.0434 INFO [CredentialRefresher] Credentials ready Jul 10 00:21:10.082192 sshd[2218]: Accepted publickey for core from 139.178.89.65 port 35180 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U Jul 10 00:21:10.084657 sshd-session[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:10.091467 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
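The `SHA256:8gcBu3X/...` value in the sshd accept lines is an OpenSSH-style key fingerprint: the SHA-256 digest of the raw key blob, base64-encoded with the padding stripped. A sketch of that computation (the input blob below is illustrative, not the key from this log):

```python
import base64
import hashlib

def openssh_fingerprint(b64_key_blob: str) -> str:
    """Fingerprint a key the way sshd logs it: SHA256 of the decoded
    blob, base64-encoded, '=' padding removed."""
    raw = base64.b64decode(b64_key_blob)
    digest = hashlib.sha256(raw).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# A real input would be the base64 field of an authorized_keys line.
fp = openssh_fingerprint(base64.b64encode(b"example-key-blob").decode())
```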
Jul 10 00:21:10.092856 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:21:10.103790 systemd-logind[1989]: New session 1 of user core. Jul 10 00:21:10.114099 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:21:10.117325 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:21:10.119836 amazon-ssm-agent[2193]: 2025-07-10 00:21:10.0442 INFO [CredentialRefresher] Next credential rotation will be in 29.999982105533334 minutes Jul 10 00:21:10.132298 (systemd)[2235]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:21:10.135205 systemd-logind[1989]: New session c1 of user core. Jul 10 00:21:10.298325 systemd[2235]: Queued start job for default target default.target. Jul 10 00:21:10.304892 systemd[2235]: Created slice app.slice - User Application Slice. Jul 10 00:21:10.304935 systemd[2235]: Reached target paths.target - Paths. Jul 10 00:21:10.304991 systemd[2235]: Reached target timers.target - Timers. Jul 10 00:21:10.306464 systemd[2235]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:21:10.319673 systemd[2235]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:21:10.319845 systemd[2235]: Reached target sockets.target - Sockets. Jul 10 00:21:10.319908 systemd[2235]: Reached target basic.target - Basic System. Jul 10 00:21:10.319961 systemd[2235]: Reached target default.target - Main User Target. Jul 10 00:21:10.320003 systemd[2235]: Startup finished in 177ms. Jul 10 00:21:10.320167 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:21:10.325940 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:21:10.477274 systemd[1]: Started sshd@1-172.31.26.174:22-139.178.89.65:58454.service - OpenSSH per-connection server daemon (139.178.89.65:58454). 
Jul 10 00:21:10.657055 sshd[2246]: Accepted publickey for core from 139.178.89.65 port 58454 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U Jul 10 00:21:10.658291 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:10.664030 systemd-logind[1989]: New session 2 of user core. Jul 10 00:21:10.671964 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:21:10.788897 sshd[2248]: Connection closed by 139.178.89.65 port 58454 Jul 10 00:21:10.789416 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:10.793819 systemd[1]: sshd@1-172.31.26.174:22-139.178.89.65:58454.service: Deactivated successfully. Jul 10 00:21:10.796634 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:21:10.798468 systemd-logind[1989]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:21:10.800662 systemd-logind[1989]: Removed session 2. Jul 10 00:21:10.819073 systemd[1]: Started sshd@2-172.31.26.174:22-139.178.89.65:58468.service - OpenSSH per-connection server daemon (139.178.89.65:58468). Jul 10 00:21:10.991442 sshd[2254]: Accepted publickey for core from 139.178.89.65 port 58468 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U Jul 10 00:21:10.992968 sshd-session[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:10.999107 systemd-logind[1989]: New session 3 of user core. Jul 10 00:21:11.002968 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 10 00:21:11.056441 amazon-ssm-agent[2193]: 2025-07-10 00:21:11.0563 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 10 00:21:11.123606 sshd[2256]: Connection closed by 139.178.89.65 port 58468 Jul 10 00:21:11.125692 sshd-session[2254]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:11.131287 systemd[1]: sshd@2-172.31.26.174:22-139.178.89.65:58468.service: Deactivated successfully. Jul 10 00:21:11.134977 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:21:11.137000 systemd-logind[1989]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:21:11.141360 systemd-logind[1989]: Removed session 3. Jul 10 00:21:11.157808 amazon-ssm-agent[2193]: 2025-07-10 00:21:11.0579 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2259) started Jul 10 00:21:11.258891 amazon-ssm-agent[2193]: 2025-07-10 00:21:11.0580 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 10 00:21:11.795088 ntpd[1973]: Listen normally on 6 eth0 [fe80::4d0:49ff:fe4c:e5e5%2]:123 Jul 10 00:21:11.795448 ntpd[1973]: 10 Jul 00:21:11 ntpd[1973]: Listen normally on 6 eth0 [fe80::4d0:49ff:fe4c:e5e5%2]:123 Jul 10 00:21:14.200527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:14.201920 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:21:14.205499 systemd[1]: Startup finished in 2.776s (kernel) + 10.275s (initrd) + 9.311s (userspace) = 22.362s. 
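The final `Startup finished` line sums the three boot phases systemd tracks; the figures reported above check out arithmetically:

```python
# Phase durations as reported in the log, in seconds.
kernel, initrd, userspace = 2.776, 10.275, 9.311

# systemd's total is the plain sum of the three phases.
total = round(kernel + initrd + userspace, 3)
```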
Jul 10 00:21:14.210839 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:21:15.412180 kubelet[2279]: E0710 00:21:15.412111 2279 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:21:15.414820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:21:15.415035 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:21:15.415567 systemd[1]: kubelet.service: Consumed 1.006s CPU time, 264.2M memory peak. Jul 10 00:21:16.371702 systemd-resolved[1907]: Clock change detected. Flushing caches. Jul 10 00:21:21.734405 systemd[1]: Started sshd@3-172.31.26.174:22-139.178.89.65:50742.service - OpenSSH per-connection server daemon (139.178.89.65:50742). Jul 10 00:21:21.912777 sshd[2291]: Accepted publickey for core from 139.178.89.65 port 50742 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U Jul 10 00:21:21.914065 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:21.919221 systemd-logind[1989]: New session 4 of user core. Jul 10 00:21:21.926396 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:21:22.047460 sshd[2293]: Connection closed by 139.178.89.65 port 50742 Jul 10 00:21:22.048232 sshd-session[2291]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:22.052292 systemd[1]: sshd@3-172.31.26.174:22-139.178.89.65:50742.service: Deactivated successfully. Jul 10 00:21:22.054011 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:21:22.054674 systemd-logind[1989]: Session 4 logged out. Waiting for processes to exit. 
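The kubelet exit above is the expected pre-`kubeadm` state: `/var/lib/kubelet/config.yaml` does not exist until `kubeadm init` or `kubeadm join` writes it, so the unit fails and systemd restarts it until the node is joined. For orientation, a minimal stand-in for that file could look like the following; the field choices beyond the required `apiVersion`/`kind` are illustrative assumptions (though `cgroupDriver: systemd` matches the `SystemdCgroup: true` in this host's containerd config):

```python
# Minimal KubeletConfiguration document of the kind kubeadm writes to
# /var/lib/kubelet/config.yaml; rendered as a plain string to avoid a
# YAML library dependency.
minimal_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""
```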
Jul 10 00:21:22.056214 systemd-logind[1989]: Removed session 4. Jul 10 00:21:22.083392 systemd[1]: Started sshd@4-172.31.26.174:22-139.178.89.65:50748.service - OpenSSH per-connection server daemon (139.178.89.65:50748). Jul 10 00:21:22.256238 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 50748 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U Jul 10 00:21:22.257625 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:22.262881 systemd-logind[1989]: New session 5 of user core. Jul 10 00:21:22.268403 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:21:22.385180 sshd[2301]: Connection closed by 139.178.89.65 port 50748 Jul 10 00:21:22.385707 sshd-session[2299]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:22.389619 systemd[1]: sshd@4-172.31.26.174:22-139.178.89.65:50748.service: Deactivated successfully. Jul 10 00:21:22.391498 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:21:22.392320 systemd-logind[1989]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:21:22.393530 systemd-logind[1989]: Removed session 5. Jul 10 00:21:22.419655 systemd[1]: Started sshd@5-172.31.26.174:22-139.178.89.65:50758.service - OpenSSH per-connection server daemon (139.178.89.65:50758). Jul 10 00:21:22.591813 sshd[2307]: Accepted publickey for core from 139.178.89.65 port 50758 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U Jul 10 00:21:22.593256 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:22.599600 systemd-logind[1989]: New session 6 of user core. Jul 10 00:21:22.606396 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 10 00:21:22.726501 sshd[2309]: Connection closed by 139.178.89.65 port 50758
Jul 10 00:21:22.727498 sshd-session[2307]: pam_unix(sshd:session): session closed for user core
Jul 10 00:21:22.731072 systemd[1]: sshd@5-172.31.26.174:22-139.178.89.65:50758.service: Deactivated successfully.
Jul 10 00:21:22.732741 systemd[1]: session-6.scope: Deactivated successfully.
Jul 10 00:21:22.733477 systemd-logind[1989]: Session 6 logged out. Waiting for processes to exit.
Jul 10 00:21:22.734664 systemd-logind[1989]: Removed session 6.
Jul 10 00:21:22.761911 systemd[1]: Started sshd@6-172.31.26.174:22-139.178.89.65:50766.service - OpenSSH per-connection server daemon (139.178.89.65:50766).
Jul 10 00:21:22.939577 sshd[2315]: Accepted publickey for core from 139.178.89.65 port 50766 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:21:22.940980 sshd-session[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:21:22.946484 systemd-logind[1989]: New session 7 of user core.
Jul 10 00:21:22.955414 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 10 00:21:23.104580 sudo[2318]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 10 00:21:23.104855 sudo[2318]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:21:23.116841 sudo[2318]: pam_unix(sudo:session): session closed for user root
Jul 10 00:21:23.140043 sshd[2317]: Connection closed by 139.178.89.65 port 50766
Jul 10 00:21:23.140745 sshd-session[2315]: pam_unix(sshd:session): session closed for user core
Jul 10 00:21:23.144228 systemd[1]: sshd@6-172.31.26.174:22-139.178.89.65:50766.service: Deactivated successfully.
Jul 10 00:21:23.146002 systemd[1]: session-7.scope: Deactivated successfully.
Jul 10 00:21:23.148083 systemd-logind[1989]: Session 7 logged out. Waiting for processes to exit.
Jul 10 00:21:23.149392 systemd-logind[1989]: Removed session 7.
Jul 10 00:21:23.169375 systemd[1]: Started sshd@7-172.31.26.174:22-139.178.89.65:50776.service - OpenSSH per-connection server daemon (139.178.89.65:50776).
Jul 10 00:21:23.342755 sshd[2324]: Accepted publickey for core from 139.178.89.65 port 50776 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:21:23.344341 sshd-session[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:21:23.349394 systemd-logind[1989]: New session 8 of user core.
Jul 10 00:21:23.366407 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 10 00:21:23.461027 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 10 00:21:23.461335 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:21:23.466243 sudo[2328]: pam_unix(sudo:session): session closed for user root
Jul 10 00:21:23.472130 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 10 00:21:23.472595 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:21:23.482717 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 00:21:23.524441 augenrules[2350]: No rules
Jul 10 00:21:23.525840 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 00:21:23.526097 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 00:21:23.527332 sudo[2327]: pam_unix(sudo:session): session closed for user root
Jul 10 00:21:23.549632 sshd[2326]: Connection closed by 139.178.89.65 port 50776
Jul 10 00:21:23.550150 sshd-session[2324]: pam_unix(sshd:session): session closed for user core
Jul 10 00:21:23.553151 systemd[1]: sshd@7-172.31.26.174:22-139.178.89.65:50776.service: Deactivated successfully.
Jul 10 00:21:23.554750 systemd[1]: session-8.scope: Deactivated successfully.
Jul 10 00:21:23.556033 systemd-logind[1989]: Session 8 logged out. Waiting for processes to exit.
Jul 10 00:21:23.557301 systemd-logind[1989]: Removed session 8.
Jul 10 00:21:23.582525 systemd[1]: Started sshd@8-172.31.26.174:22-139.178.89.65:50790.service - OpenSSH per-connection server daemon (139.178.89.65:50790).
Jul 10 00:21:23.763525 sshd[2359]: Accepted publickey for core from 139.178.89.65 port 50790 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:21:23.764854 sshd-session[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:21:23.770202 systemd-logind[1989]: New session 9 of user core.
Jul 10 00:21:23.778392 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 10 00:21:23.873946 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 10 00:21:23.874229 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:21:24.951358 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 10 00:21:24.961577 (dockerd)[2381]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 10 00:21:25.368190 dockerd[2381]: time="2025-07-10T00:21:25.368077552Z" level=info msg="Starting up"
Jul 10 00:21:25.371501 dockerd[2381]: time="2025-07-10T00:21:25.371465982Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 10 00:21:25.430621 dockerd[2381]: time="2025-07-10T00:21:25.430574169Z" level=info msg="Loading containers: start."
Jul 10 00:21:25.442180 kernel: Initializing XFRM netlink socket
Jul 10 00:21:25.680799 (udev-worker)[2401]: Network interface NamePolicy= disabled on kernel command line.
Jul 10 00:21:25.721350 systemd-networkd[1734]: docker0: Link UP
Jul 10 00:21:25.725830 dockerd[2381]: time="2025-07-10T00:21:25.725779198Z" level=info msg="Loading containers: done."
Jul 10 00:21:25.744357 dockerd[2381]: time="2025-07-10T00:21:25.744308202Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 10 00:21:25.744523 dockerd[2381]: time="2025-07-10T00:21:25.744397305Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 10 00:21:25.744523 dockerd[2381]: time="2025-07-10T00:21:25.744512452Z" level=info msg="Initializing buildkit"
Jul 10 00:21:25.782200 dockerd[2381]: time="2025-07-10T00:21:25.782144569Z" level=info msg="Completed buildkit initialization"
Jul 10 00:21:25.789578 dockerd[2381]: time="2025-07-10T00:21:25.789511421Z" level=info msg="Daemon has completed initialization"
Jul 10 00:21:25.790387 dockerd[2381]: time="2025-07-10T00:21:25.789648007Z" level=info msg="API listen on /run/docker.sock"
Jul 10 00:21:25.789726 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 10 00:21:26.102935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:21:26.105510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:21:26.421305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:21:26.429567 (kubelet)[2588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:21:26.475030 kubelet[2588]: E0710 00:21:26.474901 2588 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:21:26.479014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:21:26.479204 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:21:26.479525 systemd[1]: kubelet.service: Consumed 171ms CPU time, 108.7M memory peak.
Jul 10 00:21:27.368083 containerd[2015]: time="2025-07-10T00:21:27.368048653Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 10 00:21:27.999439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218033823.mount: Deactivated successfully.
Jul 10 00:21:29.617927 containerd[2015]: time="2025-07-10T00:21:29.617876387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:29.619170 containerd[2015]: time="2025-07-10T00:21:29.619108715Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045"
Jul 10 00:21:29.620146 containerd[2015]: time="2025-07-10T00:21:29.620092007Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:29.622551 containerd[2015]: time="2025-07-10T00:21:29.622486804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:29.623690 containerd[2015]: time="2025-07-10T00:21:29.623499550Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.255414394s"
Jul 10 00:21:29.623690 containerd[2015]: time="2025-07-10T00:21:29.623540749Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jul 10 00:21:29.624365 containerd[2015]: time="2025-07-10T00:21:29.624259958Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 10 00:21:31.454175 containerd[2015]: time="2025-07-10T00:21:31.454109969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:31.455210 containerd[2015]: time="2025-07-10T00:21:31.455149891Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912"
Jul 10 00:21:31.456311 containerd[2015]: time="2025-07-10T00:21:31.456259025Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:31.458593 containerd[2015]: time="2025-07-10T00:21:31.458560502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:31.459728 containerd[2015]: time="2025-07-10T00:21:31.459590395Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.835295184s"
Jul 10 00:21:31.459728 containerd[2015]: time="2025-07-10T00:21:31.459630459Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\""
Jul 10 00:21:31.460359 containerd[2015]: time="2025-07-10T00:21:31.460266980Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 10 00:21:32.879305 containerd[2015]: time="2025-07-10T00:21:32.879258925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:32.880213 containerd[2015]: time="2025-07-10T00:21:32.880175655Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916"
Jul 10 00:21:32.881330 containerd[2015]: time="2025-07-10T00:21:32.881259359Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:32.884373 containerd[2015]: time="2025-07-10T00:21:32.884302634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:32.886258 containerd[2015]: time="2025-07-10T00:21:32.885338370Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.424882947s"
Jul 10 00:21:32.886258 containerd[2015]: time="2025-07-10T00:21:32.885377573Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\""
Jul 10 00:21:32.886400 containerd[2015]: time="2025-07-10T00:21:32.886284334Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 10 00:21:33.950993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886844801.mount: Deactivated successfully.
Jul 10 00:21:34.503179 containerd[2015]: time="2025-07-10T00:21:34.503107315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:34.503893 containerd[2015]: time="2025-07-10T00:21:34.503753658Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363"
Jul 10 00:21:34.505432 containerd[2015]: time="2025-07-10T00:21:34.504852994Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:34.506888 containerd[2015]: time="2025-07-10T00:21:34.506858664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:34.507404 containerd[2015]: time="2025-07-10T00:21:34.507369920Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.621037436s"
Jul 10 00:21:34.507404 containerd[2015]: time="2025-07-10T00:21:34.507403174Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jul 10 00:21:34.508061 containerd[2015]: time="2025-07-10T00:21:34.508032284Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 10 00:21:34.997837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount97771428.mount: Deactivated successfully.
Jul 10 00:21:36.051465 containerd[2015]: time="2025-07-10T00:21:36.051409571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:36.055182 containerd[2015]: time="2025-07-10T00:21:36.055032507Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:36.055182 containerd[2015]: time="2025-07-10T00:21:36.055112510Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 10 00:21:36.060923 containerd[2015]: time="2025-07-10T00:21:36.060875149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:36.062044 containerd[2015]: time="2025-07-10T00:21:36.061849326Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.55378548s"
Jul 10 00:21:36.062044 containerd[2015]: time="2025-07-10T00:21:36.061888455Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 10 00:21:36.062813 containerd[2015]: time="2025-07-10T00:21:36.062669376Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 10 00:21:36.487967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249498041.mount: Deactivated successfully.
Jul 10 00:21:36.489792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 10 00:21:36.491752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:21:36.497184 containerd[2015]: time="2025-07-10T00:21:36.496789019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:21:36.497929 containerd[2015]: time="2025-07-10T00:21:36.497895306Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 10 00:21:36.499283 containerd[2015]: time="2025-07-10T00:21:36.499247736Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:21:36.501250 containerd[2015]: time="2025-07-10T00:21:36.501222451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:21:36.502049 containerd[2015]: time="2025-07-10T00:21:36.501743892Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 439.033197ms"
Jul 10 00:21:36.502049 containerd[2015]: time="2025-07-10T00:21:36.501779631Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 10 00:21:36.502275 containerd[2015]: time="2025-07-10T00:21:36.502133075Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 10 00:21:36.746302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:21:36.763659 (kubelet)[2730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:21:36.831497 kubelet[2730]: E0710 00:21:36.831447 2730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:21:36.836486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:21:36.836686 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:21:36.837228 systemd[1]: kubelet.service: Consumed 177ms CPU time, 107.9M memory peak.
Jul 10 00:21:36.970885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302181273.mount: Deactivated successfully.
Jul 10 00:21:39.272000 containerd[2015]: time="2025-07-10T00:21:39.271939885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:39.273177 containerd[2015]: time="2025-07-10T00:21:39.273122495Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
Jul 10 00:21:39.274975 containerd[2015]: time="2025-07-10T00:21:39.274875000Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:39.277976 containerd[2015]: time="2025-07-10T00:21:39.277933536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:39.279125 containerd[2015]: time="2025-07-10T00:21:39.279047007Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.776869881s"
Jul 10 00:21:39.279125 containerd[2015]: time="2025-07-10T00:21:39.279083970Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jul 10 00:21:39.965559 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 10 00:21:41.662299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:21:41.662568 systemd[1]: kubelet.service: Consumed 177ms CPU time, 107.9M memory peak.
Jul 10 00:21:41.665401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:21:41.700006 systemd[1]: Reload requested from client PID 2821 ('systemctl') (unit session-9.scope)...
Jul 10 00:21:41.700028 systemd[1]: Reloading...
Jul 10 00:21:41.847434 zram_generator::config[2868]: No configuration found.
Jul 10 00:21:41.991917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:21:42.125439 systemd[1]: Reloading finished in 424 ms.
Jul 10 00:21:42.182764 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 10 00:21:42.182886 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 10 00:21:42.183373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:21:42.183439 systemd[1]: kubelet.service: Consumed 142ms CPU time, 97.8M memory peak.
Jul 10 00:21:42.185650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:21:42.439452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:21:42.448682 (kubelet)[2928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 00:21:42.516680 kubelet[2928]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:21:42.516979 kubelet[2928]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:21:42.516979 kubelet[2928]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:21:42.516979 kubelet[2928]: I0710 00:21:42.516826 2928 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:21:42.839932 kubelet[2928]: I0710 00:21:42.839800 2928 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 10 00:21:42.839932 kubelet[2928]: I0710 00:21:42.839836 2928 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:21:42.840511 kubelet[2928]: I0710 00:21:42.840470 2928 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 10 00:21:42.901558 kubelet[2928]: I0710 00:21:42.900821 2928 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:21:42.908482 kubelet[2928]: E0710 00:21:42.908437 2928 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.174:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:21:42.926089 kubelet[2928]: I0710 00:21:42.926050 2928 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 10 00:21:42.933977 kubelet[2928]: I0710 00:21:42.933944 2928 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:21:42.936264 kubelet[2928]: I0710 00:21:42.936205 2928 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:21:42.936431 kubelet[2928]: I0710 00:21:42.936252 2928 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-174","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:21:42.939343 kubelet[2928]: I0710 00:21:42.939301 2928 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:21:42.939343 kubelet[2928]: I0710 00:21:42.939333 2928 container_manager_linux.go:304] "Creating device plugin manager"
Jul 10 00:21:42.941041 kubelet[2928]: I0710 00:21:42.940924 2928 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:21:42.946215 kubelet[2928]: I0710 00:21:42.946181 2928 kubelet.go:446] "Attempting to sync node with API server"
Jul 10 00:21:42.946215 kubelet[2928]: I0710 00:21:42.946217 2928 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:21:42.948592 kubelet[2928]: I0710 00:21:42.948551 2928 kubelet.go:352] "Adding apiserver pod source"
Jul 10 00:21:42.948592 kubelet[2928]: I0710 00:21:42.948583 2928 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:21:42.956710 kubelet[2928]: W0710 00:21:42.956142 2928 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-174&limit=500&resourceVersion=0": dial tcp 172.31.26.174:6443: connect: connection refused
Jul 10 00:21:42.956710 kubelet[2928]: E0710 00:21:42.956217 2928 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-174&limit=500&resourceVersion=0\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:21:42.956710 kubelet[2928]: W0710 00:21:42.956624 2928 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.174:6443: connect: connection refused
Jul 10 00:21:42.956710 kubelet[2928]: E0710 00:21:42.956675 2928 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:21:42.959031 kubelet[2928]: I0710 00:21:42.959000 2928 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 10 00:21:42.964210 kubelet[2928]: I0710 00:21:42.964186 2928 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 00:21:42.964326 kubelet[2928]: W0710 00:21:42.964264 2928 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 10 00:21:42.965276 kubelet[2928]: I0710 00:21:42.965257 2928 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 00:21:42.965348 kubelet[2928]: I0710 00:21:42.965301 2928 server.go:1287] "Started kubelet"
Jul 10 00:21:42.969589 kubelet[2928]: I0710 00:21:42.969534 2928 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:21:42.977875 kubelet[2928]: I0710 00:21:42.977445 2928 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:21:42.977875 kubelet[2928]: I0710 00:21:42.977810 2928 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:21:42.978011 kubelet[2928]: I0710 00:21:42.977978 2928 server.go:479] "Adding debug handlers to kubelet server"
Jul 10 00:21:42.980753 kubelet[2928]: I0710 00:21:42.980259 2928 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:21:42.988607 kubelet[2928]: I0710 00:21:42.988564 2928 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:21:42.990224 kubelet[2928]: I0710 00:21:42.989946 2928 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 00:21:42.990224 kubelet[2928]: E0710 00:21:42.990143 2928 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-174\" not found"
Jul 10 00:21:42.994233 kubelet[2928]: E0710 00:21:42.987818 2928 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.174:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.174:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-174.1850bbf1f7c145ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-174,UID:ip-172-31-26-174,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-174,},FirstTimestamp:2025-07-10 00:21:42.965274111 +0000 UTC m=+0.491062547,LastTimestamp:2025-07-10 00:21:42.965274111 +0000 UTC m=+0.491062547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-174,}"
Jul 10 00:21:42.994233 kubelet[2928]: E0710 00:21:42.993110 2928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-174?timeout=10s\": dial tcp 172.31.26.174:6443: connect: connection refused" interval="200ms"
Jul 10 00:21:42.994233 kubelet[2928]: I0710 00:21:42.993480 2928 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 00:21:42.994233 kubelet[2928]: I0710 00:21:42.993529 2928 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:21:42.995344 kubelet[2928]: I0710 00:21:42.995328 2928 factory.go:221] Registration of the systemd container factory successfully
Jul 10 00:21:42.995563 kubelet[2928]: I0710 00:21:42.995548 2928 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:21:43.000542 kubelet[2928]: W0710 00:21:42.999340 2928 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.174:6443: connect: connection refused
Jul 10 00:21:43.000542 kubelet[2928]: E0710 00:21:42.999388 2928 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:21:43.002690 kubelet[2928]: I0710 00:21:43.002621 2928 factory.go:221] Registration of the containerd container factory successfully
Jul 10 00:21:43.009017 kubelet[2928]: I0710 00:21:43.008874 2928 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:21:43.010241 kubelet[2928]: I0710 00:21:43.010221 2928 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:21:43.010521 kubelet[2928]: I0710 00:21:43.010330 2928 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 10 00:21:43.010521 kubelet[2928]: I0710 00:21:43.010351 2928 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 00:21:43.010521 kubelet[2928]: I0710 00:21:43.010358 2928 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 00:21:43.010521 kubelet[2928]: E0710 00:21:43.010398 2928 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:21:43.021527 kubelet[2928]: W0710 00:21:43.021482 2928 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.174:6443: connect: connection refused Jul 10 00:21:43.021788 kubelet[2928]: E0710 00:21:43.021736 2928 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:43.021914 kubelet[2928]: E0710 00:21:43.021828 2928 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:21:43.030293 kubelet[2928]: I0710 00:21:43.030265 2928 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:21:43.030293 kubelet[2928]: I0710 00:21:43.030282 2928 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:21:43.030293 kubelet[2928]: I0710 00:21:43.030300 2928 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:21:43.033232 kubelet[2928]: I0710 00:21:43.032951 2928 policy_none.go:49] "None policy: Start" Jul 10 00:21:43.033232 kubelet[2928]: I0710 00:21:43.032976 2928 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:21:43.033232 kubelet[2928]: I0710 00:21:43.032991 2928 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:21:43.040019 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 00:21:43.054817 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:21:43.058113 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 00:21:43.073240 kubelet[2928]: I0710 00:21:43.073172 2928 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:21:43.073440 kubelet[2928]: I0710 00:21:43.073366 2928 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:21:43.073440 kubelet[2928]: I0710 00:21:43.073379 2928 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:21:43.074307 kubelet[2928]: I0710 00:21:43.073869 2928 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:21:43.075442 kubelet[2928]: E0710 00:21:43.075428 2928 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 00:21:43.075647 kubelet[2928]: E0710 00:21:43.075581 2928 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-174\" not found" Jul 10 00:21:43.123586 systemd[1]: Created slice kubepods-burstable-podbf1e12abc111840f7abebc1a5b0f8bc3.slice - libcontainer container kubepods-burstable-podbf1e12abc111840f7abebc1a5b0f8bc3.slice. Jul 10 00:21:43.142186 kubelet[2928]: E0710 00:21:43.141941 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:43.146721 systemd[1]: Created slice kubepods-burstable-pod19f475e0f6c8138409a4b23d02be53ae.slice - libcontainer container kubepods-burstable-pod19f475e0f6c8138409a4b23d02be53ae.slice. Jul 10 00:21:43.149810 kubelet[2928]: E0710 00:21:43.149592 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:43.151799 systemd[1]: Created slice kubepods-burstable-pod72780118b6cb374b7ecc704e542e6426.slice - libcontainer container kubepods-burstable-pod72780118b6cb374b7ecc704e542e6426.slice. 
Jul 10 00:21:43.153551 kubelet[2928]: E0710 00:21:43.153527 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:43.175364 kubelet[2928]: I0710 00:21:43.175322 2928 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-174" Jul 10 00:21:43.175731 kubelet[2928]: E0710 00:21:43.175700 2928 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.174:6443/api/v1/nodes\": dial tcp 172.31.26.174:6443: connect: connection refused" node="ip-172-31-26-174" Jul 10 00:21:43.194384 kubelet[2928]: E0710 00:21:43.194343 2928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-174?timeout=10s\": dial tcp 172.31.26.174:6443: connect: connection refused" interval="400ms" Jul 10 00:21:43.295016 kubelet[2928]: I0710 00:21:43.294828 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19f475e0f6c8138409a4b23d02be53ae-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-174\" (UID: \"19f475e0f6c8138409a4b23d02be53ae\") " pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:43.295016 kubelet[2928]: I0710 00:21:43.294883 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19f475e0f6c8138409a4b23d02be53ae-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-174\" (UID: \"19f475e0f6c8138409a4b23d02be53ae\") " pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:43.295016 kubelet[2928]: I0710 00:21:43.294981 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72780118b6cb374b7ecc704e542e6426-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-174\" (UID: \"72780118b6cb374b7ecc704e542e6426\") " pod="kube-system/kube-scheduler-ip-172-31-26-174" Jul 10 00:21:43.295016 kubelet[2928]: I0710 00:21:43.295002 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf1e12abc111840f7abebc1a5b0f8bc3-ca-certs\") pod \"kube-apiserver-ip-172-31-26-174\" (UID: \"bf1e12abc111840f7abebc1a5b0f8bc3\") " pod="kube-system/kube-apiserver-ip-172-31-26-174" Jul 10 00:21:43.295016 kubelet[2928]: I0710 00:21:43.295019 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf1e12abc111840f7abebc1a5b0f8bc3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-174\" (UID: \"bf1e12abc111840f7abebc1a5b0f8bc3\") " pod="kube-system/kube-apiserver-ip-172-31-26-174" Jul 10 00:21:43.295366 kubelet[2928]: I0710 00:21:43.295037 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19f475e0f6c8138409a4b23d02be53ae-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-174\" (UID: \"19f475e0f6c8138409a4b23d02be53ae\") " pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:43.295366 kubelet[2928]: I0710 00:21:43.295052 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19f475e0f6c8138409a4b23d02be53ae-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-174\" (UID: \"19f475e0f6c8138409a4b23d02be53ae\") " pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:43.295366 kubelet[2928]: I0710 00:21:43.295065 2928 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf1e12abc111840f7abebc1a5b0f8bc3-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-174\" (UID: \"bf1e12abc111840f7abebc1a5b0f8bc3\") " pod="kube-system/kube-apiserver-ip-172-31-26-174" Jul 10 00:21:43.295366 kubelet[2928]: I0710 00:21:43.295079 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19f475e0f6c8138409a4b23d02be53ae-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-174\" (UID: \"19f475e0f6c8138409a4b23d02be53ae\") " pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:43.377550 kubelet[2928]: I0710 00:21:43.377459 2928 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-174" Jul 10 00:21:43.377880 kubelet[2928]: E0710 00:21:43.377757 2928 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.174:6443/api/v1/nodes\": dial tcp 172.31.26.174:6443: connect: connection refused" node="ip-172-31-26-174" Jul 10 00:21:43.443786 containerd[2015]: time="2025-07-10T00:21:43.443743832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-174,Uid:bf1e12abc111840f7abebc1a5b0f8bc3,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:43.461733 containerd[2015]: time="2025-07-10T00:21:43.461514908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-174,Uid:19f475e0f6c8138409a4b23d02be53ae,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:43.469120 containerd[2015]: time="2025-07-10T00:21:43.469003922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-174,Uid:72780118b6cb374b7ecc704e542e6426,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:43.596193 kubelet[2928]: E0710 00:21:43.595602 2928 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://172.31.26.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-174?timeout=10s\": dial tcp 172.31.26.174:6443: connect: connection refused" interval="800ms" Jul 10 00:21:43.606489 containerd[2015]: time="2025-07-10T00:21:43.606239616Z" level=info msg="connecting to shim 44a32d05605ba6afcbb2b6a07a73c069f4da254b4577e7252a9fccd7cb8742d0" address="unix:///run/containerd/s/75911ad9daa6fe2fbb856562fc86ad7a7b96d89df473909eb931c5b75e9d9f11" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:43.608110 containerd[2015]: time="2025-07-10T00:21:43.608069782Z" level=info msg="connecting to shim ef6781539e37b4bd16ff980f281fd8c3570a2f2fc95c7aa24d2af9812f27cfb0" address="unix:///run/containerd/s/5cdda645adad131db007f66554f0faf6ca0802b7c237bc50539df38deec16de3" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:43.619355 containerd[2015]: time="2025-07-10T00:21:43.619296485Z" level=info msg="connecting to shim 72a1e55aee7ae97f9badec8fa173bb59551e09120c5bfdc908e8360351b0305c" address="unix:///run/containerd/s/f2c8a8b95e7c6175ac1c785a6ae0d7a9700e4412a1e418b8180e5d4b728c2d8d" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:43.746184 systemd[1]: Started cri-containerd-44a32d05605ba6afcbb2b6a07a73c069f4da254b4577e7252a9fccd7cb8742d0.scope - libcontainer container 44a32d05605ba6afcbb2b6a07a73c069f4da254b4577e7252a9fccd7cb8742d0. Jul 10 00:21:43.750423 systemd[1]: Started cri-containerd-72a1e55aee7ae97f9badec8fa173bb59551e09120c5bfdc908e8360351b0305c.scope - libcontainer container 72a1e55aee7ae97f9badec8fa173bb59551e09120c5bfdc908e8360351b0305c. Jul 10 00:21:43.752321 systemd[1]: Started cri-containerd-ef6781539e37b4bd16ff980f281fd8c3570a2f2fc95c7aa24d2af9812f27cfb0.scope - libcontainer container ef6781539e37b4bd16ff980f281fd8c3570a2f2fc95c7aa24d2af9812f27cfb0. 
Jul 10 00:21:43.784103 kubelet[2928]: I0710 00:21:43.783787 2928 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-174" Jul 10 00:21:43.787042 kubelet[2928]: E0710 00:21:43.786978 2928 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.174:6443/api/v1/nodes\": dial tcp 172.31.26.174:6443: connect: connection refused" node="ip-172-31-26-174" Jul 10 00:21:43.806499 kubelet[2928]: W0710 00:21:43.806252 2928 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.174:6443: connect: connection refused Jul 10 00:21:43.807123 kubelet[2928]: E0710 00:21:43.806900 2928 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:43.865182 kubelet[2928]: W0710 00:21:43.864274 2928 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.174:6443: connect: connection refused Jul 10 00:21:43.865182 kubelet[2928]: E0710 00:21:43.864484 2928 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:43.867209 containerd[2015]: time="2025-07-10T00:21:43.866836707Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-174,Uid:72780118b6cb374b7ecc704e542e6426,Namespace:kube-system,Attempt:0,} returns sandbox id \"72a1e55aee7ae97f9badec8fa173bb59551e09120c5bfdc908e8360351b0305c\"" Jul 10 00:21:43.877252 containerd[2015]: time="2025-07-10T00:21:43.877017825Z" level=info msg="CreateContainer within sandbox \"72a1e55aee7ae97f9badec8fa173bb59551e09120c5bfdc908e8360351b0305c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:21:43.878858 containerd[2015]: time="2025-07-10T00:21:43.878475128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-174,Uid:bf1e12abc111840f7abebc1a5b0f8bc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef6781539e37b4bd16ff980f281fd8c3570a2f2fc95c7aa24d2af9812f27cfb0\"" Jul 10 00:21:43.883826 containerd[2015]: time="2025-07-10T00:21:43.883717255Z" level=info msg="CreateContainer within sandbox \"ef6781539e37b4bd16ff980f281fd8c3570a2f2fc95c7aa24d2af9812f27cfb0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:21:43.887921 containerd[2015]: time="2025-07-10T00:21:43.887808747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-174,Uid:19f475e0f6c8138409a4b23d02be53ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"44a32d05605ba6afcbb2b6a07a73c069f4da254b4577e7252a9fccd7cb8742d0\"" Jul 10 00:21:43.892721 containerd[2015]: time="2025-07-10T00:21:43.892679882Z" level=info msg="CreateContainer within sandbox \"44a32d05605ba6afcbb2b6a07a73c069f4da254b4577e7252a9fccd7cb8742d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:21:43.910608 containerd[2015]: time="2025-07-10T00:21:43.910553457Z" level=info msg="Container 8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:21:43.917254 containerd[2015]: time="2025-07-10T00:21:43.917212437Z" level=info msg="Container 
f4d9f394f274e5a040675b786f3fedb1443196d25d130c6f12e10c8e7ed425f6: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:21:43.922766 kubelet[2928]: W0710 00:21:43.922710 2928 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-174&limit=500&resourceVersion=0": dial tcp 172.31.26.174:6443: connect: connection refused Jul 10 00:21:43.922766 kubelet[2928]: E0710 00:21:43.922773 2928 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-174&limit=500&resourceVersion=0\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:43.926476 containerd[2015]: time="2025-07-10T00:21:43.926434254Z" level=info msg="Container 95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:21:43.936528 containerd[2015]: time="2025-07-10T00:21:43.936362142Z" level=info msg="CreateContainer within sandbox \"72a1e55aee7ae97f9badec8fa173bb59551e09120c5bfdc908e8360351b0305c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62\"" Jul 10 00:21:43.937016 containerd[2015]: time="2025-07-10T00:21:43.936961729Z" level=info msg="StartContainer for \"8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62\"" Jul 10 00:21:43.937996 containerd[2015]: time="2025-07-10T00:21:43.937945262Z" level=info msg="connecting to shim 8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62" address="unix:///run/containerd/s/f2c8a8b95e7c6175ac1c785a6ae0d7a9700e4412a1e418b8180e5d4b728c2d8d" protocol=ttrpc version=3 Jul 10 00:21:43.943636 containerd[2015]: time="2025-07-10T00:21:43.943515211Z" level=info 
msg="CreateContainer within sandbox \"44a32d05605ba6afcbb2b6a07a73c069f4da254b4577e7252a9fccd7cb8742d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3\"" Jul 10 00:21:43.944190 containerd[2015]: time="2025-07-10T00:21:43.944146404Z" level=info msg="StartContainer for \"95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3\"" Jul 10 00:21:43.945655 containerd[2015]: time="2025-07-10T00:21:43.945629264Z" level=info msg="CreateContainer within sandbox \"ef6781539e37b4bd16ff980f281fd8c3570a2f2fc95c7aa24d2af9812f27cfb0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f4d9f394f274e5a040675b786f3fedb1443196d25d130c6f12e10c8e7ed425f6\"" Jul 10 00:21:43.946478 containerd[2015]: time="2025-07-10T00:21:43.946434761Z" level=info msg="StartContainer for \"f4d9f394f274e5a040675b786f3fedb1443196d25d130c6f12e10c8e7ed425f6\"" Jul 10 00:21:43.948143 containerd[2015]: time="2025-07-10T00:21:43.947998459Z" level=info msg="connecting to shim 95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3" address="unix:///run/containerd/s/75911ad9daa6fe2fbb856562fc86ad7a7b96d89df473909eb931c5b75e9d9f11" protocol=ttrpc version=3 Jul 10 00:21:43.949100 containerd[2015]: time="2025-07-10T00:21:43.949077815Z" level=info msg="connecting to shim f4d9f394f274e5a040675b786f3fedb1443196d25d130c6f12e10c8e7ed425f6" address="unix:///run/containerd/s/5cdda645adad131db007f66554f0faf6ca0802b7c237bc50539df38deec16de3" protocol=ttrpc version=3 Jul 10 00:21:43.960478 systemd[1]: Started cri-containerd-8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62.scope - libcontainer container 8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62. 
Jul 10 00:21:43.976374 systemd[1]: Started cri-containerd-f4d9f394f274e5a040675b786f3fedb1443196d25d130c6f12e10c8e7ed425f6.scope - libcontainer container f4d9f394f274e5a040675b786f3fedb1443196d25d130c6f12e10c8e7ed425f6. Jul 10 00:21:43.985358 systemd[1]: Started cri-containerd-95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3.scope - libcontainer container 95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3. Jul 10 00:21:44.061240 containerd[2015]: time="2025-07-10T00:21:44.060896448Z" level=info msg="StartContainer for \"f4d9f394f274e5a040675b786f3fedb1443196d25d130c6f12e10c8e7ed425f6\" returns successfully" Jul 10 00:21:44.089268 containerd[2015]: time="2025-07-10T00:21:44.089212180Z" level=info msg="StartContainer for \"8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62\" returns successfully" Jul 10 00:21:44.094248 containerd[2015]: time="2025-07-10T00:21:44.094200533Z" level=info msg="StartContainer for \"95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3\" returns successfully" Jul 10 00:21:44.392064 kubelet[2928]: W0710 00:21:44.391982 2928 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.174:6443: connect: connection refused Jul 10 00:21:44.392254 kubelet[2928]: E0710 00:21:44.392089 2928 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:44.397187 kubelet[2928]: E0710 00:21:44.396773 2928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.26.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-174?timeout=10s\": dial tcp 172.31.26.174:6443: connect: connection refused" interval="1.6s" Jul 10 00:21:44.591361 kubelet[2928]: I0710 00:21:44.591332 2928 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-174" Jul 10 00:21:44.591765 kubelet[2928]: E0710 00:21:44.591736 2928 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.174:6443/api/v1/nodes\": dial tcp 172.31.26.174:6443: connect: connection refused" node="ip-172-31-26-174" Jul 10 00:21:45.017849 kubelet[2928]: E0710 00:21:45.017803 2928 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.174:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:45.054814 kubelet[2928]: E0710 00:21:45.054538 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:45.062384 kubelet[2928]: E0710 00:21:45.061780 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:45.065066 kubelet[2928]: E0710 00:21:45.065014 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:45.559419 kubelet[2928]: W0710 00:21:45.559370 2928 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://172.31.26.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-174&limit=500&resourceVersion=0": dial tcp 172.31.26.174:6443: connect: connection refused Jul 10 00:21:45.559630 kubelet[2928]: E0710 00:21:45.559593 2928 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-174&limit=500&resourceVersion=0\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:45.736970 kubelet[2928]: W0710 00:21:45.736929 2928 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.174:6443: connect: connection refused Jul 10 00:21:45.737129 kubelet[2928]: E0710 00:21:45.736979 2928 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:45.997711 kubelet[2928]: E0710 00:21:45.997669 2928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-174?timeout=10s\": dial tcp 172.31.26.174:6443: connect: connection refused" interval="3.2s" Jul 10 00:21:46.049474 kubelet[2928]: W0710 00:21:46.049434 2928 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.174:6443: connect: connection refused Jul 10 00:21:46.049830 kubelet[2928]: 
E0710 00:21:46.049481 2928 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:46.065503 kubelet[2928]: E0710 00:21:46.065349 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:46.065623 kubelet[2928]: E0710 00:21:46.065509 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:46.065735 kubelet[2928]: E0710 00:21:46.065717 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:46.193870 kubelet[2928]: I0710 00:21:46.193842 2928 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-174" Jul 10 00:21:46.194238 kubelet[2928]: E0710 00:21:46.194204 2928 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.174:6443/api/v1/nodes\": dial tcp 172.31.26.174:6443: connect: connection refused" node="ip-172-31-26-174" Jul 10 00:21:46.953074 kubelet[2928]: W0710 00:21:46.952861 2928 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.174:6443: connect: connection refused Jul 10 00:21:46.953074 kubelet[2928]: E0710 00:21:46.952922 2928 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://172.31.26.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.174:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:47.067225 kubelet[2928]: E0710 00:21:47.067044 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:47.067225 kubelet[2928]: E0710 00:21:47.067148 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:47.748629 kubelet[2928]: E0710 00:21:47.748580 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:49.396772 kubelet[2928]: I0710 00:21:49.396742 2928 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-174" Jul 10 00:21:49.488578 kubelet[2928]: E0710 00:21:49.488314 2928 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:49.549989 kubelet[2928]: I0710 00:21:49.549945 2928 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-174" Jul 10 00:21:49.549989 kubelet[2928]: E0710 00:21:49.549984 2928 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-26-174\": node \"ip-172-31-26-174\" not found" Jul 10 00:21:49.559466 kubelet[2928]: E0710 00:21:49.559431 2928 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-174\" not found" Jul 10 00:21:49.582967 kubelet[2928]: E0710 00:21:49.582684 2928 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-26-174.1850bbf1f7c145ff default 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-174,UID:ip-172-31-26-174,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-174,},FirstTimestamp:2025-07-10 00:21:42.965274111 +0000 UTC m=+0.491062547,LastTimestamp:2025-07-10 00:21:42.965274111 +0000 UTC m=+0.491062547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-174,}" Jul 10 00:21:49.660348 kubelet[2928]: E0710 00:21:49.660226 2928 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-174\" not found" Jul 10 00:21:49.715111 kubelet[2928]: E0710 00:21:49.715080 2928 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-174\" not found" node="ip-172-31-26-174" Jul 10 00:21:49.761258 kubelet[2928]: E0710 00:21:49.761212 2928 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-174\" not found" Jul 10 00:21:49.861748 kubelet[2928]: E0710 00:21:49.861712 2928 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-174\" not found" Jul 10 00:21:49.962453 kubelet[2928]: E0710 00:21:49.962330 2928 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-174\" not found" Jul 10 00:21:50.062665 kubelet[2928]: E0710 00:21:50.062618 2928 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-174\" not found" Jul 10 00:21:50.162774 kubelet[2928]: E0710 00:21:50.162724 2928 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-174\" not found" Jul 10 00:21:50.291270 kubelet[2928]: I0710 00:21:50.291123 2928 kubelet.go:3194] "Creating a mirror 
pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-174" Jul 10 00:21:50.296634 kubelet[2928]: E0710 00:21:50.296593 2928 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-174\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-174" Jul 10 00:21:50.296634 kubelet[2928]: I0710 00:21:50.296624 2928 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:50.298986 kubelet[2928]: E0710 00:21:50.298676 2928 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-174\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:50.298986 kubelet[2928]: I0710 00:21:50.298708 2928 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-174" Jul 10 00:21:50.300754 kubelet[2928]: E0710 00:21:50.300721 2928 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-174\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-174" Jul 10 00:21:50.961255 kubelet[2928]: I0710 00:21:50.961218 2928 apiserver.go:52] "Watching apiserver" Jul 10 00:21:50.993640 kubelet[2928]: I0710 00:21:50.993599 2928 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:21:51.413927 systemd[1]: Reload requested from client PID 3205 ('systemctl') (unit session-9.scope)... Jul 10 00:21:51.413945 systemd[1]: Reloading... Jul 10 00:21:51.514191 zram_generator::config[3248]: No configuration found. 
Jul 10 00:21:51.644119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:21:51.795074 systemd[1]: Reloading finished in 380 ms. Jul 10 00:21:51.825240 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:51.841380 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:21:51.841602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:51.841662 systemd[1]: kubelet.service: Consumed 885ms CPU time, 128.4M memory peak. Jul 10 00:21:51.843635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:52.103564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:52.116740 (kubelet)[3309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:21:52.166821 kubelet[3309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:21:52.166821 kubelet[3309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:21:52.166821 kubelet[3309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:21:52.166821 kubelet[3309]: I0710 00:21:52.166733 3309 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:21:52.172800 kubelet[3309]: I0710 00:21:52.172765 3309 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 00:21:52.172800 kubelet[3309]: I0710 00:21:52.172788 3309 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:21:52.173035 kubelet[3309]: I0710 00:21:52.173019 3309 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 00:21:52.174148 kubelet[3309]: I0710 00:21:52.174128 3309 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:21:52.176756 kubelet[3309]: I0710 00:21:52.176169 3309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:21:52.179894 kubelet[3309]: I0710 00:21:52.179873 3309 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:21:52.184178 kubelet[3309]: I0710 00:21:52.183201 3309 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:21:52.184178 kubelet[3309]: I0710 00:21:52.183395 3309 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:21:52.184178 kubelet[3309]: I0710 00:21:52.183423 3309 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-174","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:21:52.184178 kubelet[3309]: I0710 00:21:52.183644 3309 topology_manager.go:138] "Creating topology manager with none 
policy" Jul 10 00:21:52.184386 kubelet[3309]: I0710 00:21:52.183654 3309 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 00:21:52.184386 kubelet[3309]: I0710 00:21:52.183698 3309 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:21:52.184386 kubelet[3309]: I0710 00:21:52.183811 3309 kubelet.go:446] "Attempting to sync node with API server" Jul 10 00:21:52.184386 kubelet[3309]: I0710 00:21:52.183832 3309 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:21:52.184386 kubelet[3309]: I0710 00:21:52.183853 3309 kubelet.go:352] "Adding apiserver pod source" Jul 10 00:21:52.184386 kubelet[3309]: I0710 00:21:52.183862 3309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:21:52.185446 kubelet[3309]: I0710 00:21:52.185409 3309 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:21:52.185812 kubelet[3309]: I0710 00:21:52.185776 3309 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:21:52.186188 kubelet[3309]: I0710 00:21:52.186130 3309 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:21:52.186377 kubelet[3309]: I0710 00:21:52.186361 3309 server.go:1287] "Started kubelet" Jul 10 00:21:52.188289 kubelet[3309]: I0710 00:21:52.188273 3309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:21:52.197627 kubelet[3309]: I0710 00:21:52.197593 3309 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:21:52.198613 kubelet[3309]: I0710 00:21:52.198595 3309 server.go:479] "Adding debug handlers to kubelet server" Jul 10 00:21:52.200823 kubelet[3309]: I0710 00:21:52.199780 3309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:21:52.201381 kubelet[3309]: I0710 00:21:52.201361 3309 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:21:52.201920 kubelet[3309]: I0710 00:21:52.201680 3309 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:21:52.202302 kubelet[3309]: I0710 00:21:52.202098 3309 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:21:52.203059 kubelet[3309]: E0710 00:21:52.202688 3309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-174\" not found" Jul 10 00:21:52.207499 kubelet[3309]: I0710 00:21:52.207474 3309 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:21:52.207589 kubelet[3309]: I0710 00:21:52.207571 3309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:21:52.208282 kubelet[3309]: E0710 00:21:52.208236 3309 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:21:52.208418 kubelet[3309]: I0710 00:21:52.208408 3309 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:21:52.208594 kubelet[3309]: I0710 00:21:52.208586 3309 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:21:52.213069 kubelet[3309]: I0710 00:21:52.213046 3309 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:21:52.213205 kubelet[3309]: I0710 00:21:52.213056 3309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:21:52.216614 kubelet[3309]: I0710 00:21:52.216590 3309 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:21:52.216859 kubelet[3309]: I0710 00:21:52.216783 3309 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 00:21:52.217118 kubelet[3309]: I0710 00:21:52.217018 3309 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:21:52.217409 kubelet[3309]: I0710 00:21:52.217323 3309 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 00:21:52.217603 kubelet[3309]: E0710 00:21:52.217583 3309 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:21:52.276809 kubelet[3309]: I0710 00:21:52.276781 3309 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:21:52.276809 kubelet[3309]: I0710 00:21:52.276795 3309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:21:52.276963 kubelet[3309]: I0710 00:21:52.276821 3309 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:21:52.276990 kubelet[3309]: I0710 00:21:52.276981 3309 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:21:52.277013 kubelet[3309]: I0710 00:21:52.276990 3309 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:21:52.277013 kubelet[3309]: I0710 00:21:52.277007 3309 policy_none.go:49] "None policy: Start" Jul 10 00:21:52.277057 kubelet[3309]: I0710 00:21:52.277016 3309 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:21:52.277057 kubelet[3309]: I0710 00:21:52.277025 3309 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:21:52.277134 kubelet[3309]: I0710 00:21:52.277119 3309 state_mem.go:75] "Updated machine memory state" Jul 10 00:21:52.282311 kubelet[3309]: I0710 00:21:52.281732 3309 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:21:52.282311 kubelet[3309]: I0710 
00:21:52.281879 3309 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:21:52.282311 kubelet[3309]: I0710 00:21:52.281889 3309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:21:52.282311 kubelet[3309]: I0710 00:21:52.282224 3309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:21:52.285486 kubelet[3309]: E0710 00:21:52.285462 3309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:21:52.318996 kubelet[3309]: I0710 00:21:52.318876 3309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-174" Jul 10 00:21:52.321211 kubelet[3309]: I0710 00:21:52.321184 3309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:52.321498 kubelet[3309]: I0710 00:21:52.321404 3309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-174" Jul 10 00:21:52.385385 kubelet[3309]: I0710 00:21:52.385107 3309 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-174" Jul 10 00:21:52.393859 kubelet[3309]: I0710 00:21:52.393828 3309 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-26-174" Jul 10 00:21:52.394056 kubelet[3309]: I0710 00:21:52.393921 3309 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-174" Jul 10 00:21:52.409647 kubelet[3309]: I0710 00:21:52.409607 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf1e12abc111840f7abebc1a5b0f8bc3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-174\" (UID: \"bf1e12abc111840f7abebc1a5b0f8bc3\") " pod="kube-system/kube-apiserver-ip-172-31-26-174" Jul 
10 00:21:52.410176 kubelet[3309]: I0710 00:21:52.410097 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72780118b6cb374b7ecc704e542e6426-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-174\" (UID: \"72780118b6cb374b7ecc704e542e6426\") " pod="kube-system/kube-scheduler-ip-172-31-26-174" Jul 10 00:21:52.410365 kubelet[3309]: I0710 00:21:52.410318 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf1e12abc111840f7abebc1a5b0f8bc3-ca-certs\") pod \"kube-apiserver-ip-172-31-26-174\" (UID: \"bf1e12abc111840f7abebc1a5b0f8bc3\") " pod="kube-system/kube-apiserver-ip-172-31-26-174" Jul 10 00:21:52.410492 kubelet[3309]: I0710 00:21:52.410462 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf1e12abc111840f7abebc1a5b0f8bc3-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-174\" (UID: \"bf1e12abc111840f7abebc1a5b0f8bc3\") " pod="kube-system/kube-apiserver-ip-172-31-26-174" Jul 10 00:21:52.435808 sudo[3339]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:21:52.436096 sudo[3339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 00:21:52.511010 kubelet[3309]: I0710 00:21:52.510691 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19f475e0f6c8138409a4b23d02be53ae-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-174\" (UID: \"19f475e0f6c8138409a4b23d02be53ae\") " pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:52.511010 kubelet[3309]: I0710 00:21:52.510737 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19f475e0f6c8138409a4b23d02be53ae-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-174\" (UID: \"19f475e0f6c8138409a4b23d02be53ae\") " pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:52.511010 kubelet[3309]: I0710 00:21:52.510762 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19f475e0f6c8138409a4b23d02be53ae-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-174\" (UID: \"19f475e0f6c8138409a4b23d02be53ae\") " pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:52.511010 kubelet[3309]: I0710 00:21:52.510831 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19f475e0f6c8138409a4b23d02be53ae-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-174\" (UID: \"19f475e0f6c8138409a4b23d02be53ae\") " pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:52.511010 kubelet[3309]: I0710 00:21:52.510932 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19f475e0f6c8138409a4b23d02be53ae-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-174\" (UID: \"19f475e0f6c8138409a4b23d02be53ae\") " pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:53.061637 sudo[3339]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:53.185601 kubelet[3309]: I0710 00:21:53.185562 3309 apiserver.go:52] "Watching apiserver" Jul 10 00:21:53.208816 kubelet[3309]: I0710 00:21:53.208777 3309 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:21:53.257136 kubelet[3309]: I0710 00:21:53.257105 3309 kubelet.go:3194] "Creating a mirror pod for static 
pod" pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:53.258428 kubelet[3309]: I0710 00:21:53.258401 3309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-174" Jul 10 00:21:53.258798 kubelet[3309]: I0710 00:21:53.258779 3309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-174" Jul 10 00:21:53.275574 kubelet[3309]: E0710 00:21:53.275532 3309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-174\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-174" Jul 10 00:21:53.277991 kubelet[3309]: E0710 00:21:53.277959 3309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-174\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-174" Jul 10 00:21:53.280467 kubelet[3309]: E0710 00:21:53.280439 3309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-174\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-26-174" Jul 10 00:21:53.290869 kubelet[3309]: I0710 00:21:53.290795 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-174" podStartSLOduration=1.290775028 podStartE2EDuration="1.290775028s" podCreationTimestamp="2025-07-10 00:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:21:53.274819344 +0000 UTC m=+1.152428790" watchObservedRunningTime="2025-07-10 00:21:53.290775028 +0000 UTC m=+1.168384447" Jul 10 00:21:53.301709 kubelet[3309]: I0710 00:21:53.301648 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-174" podStartSLOduration=1.301628513 podStartE2EDuration="1.301628513s" podCreationTimestamp="2025-07-10 00:21:52 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:21:53.29167796 +0000 UTC m=+1.169287404" watchObservedRunningTime="2025-07-10 00:21:53.301628513 +0000 UTC m=+1.179237934" Jul 10 00:21:53.315863 kubelet[3309]: I0710 00:21:53.315731 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-174" podStartSLOduration=1.315696468 podStartE2EDuration="1.315696468s" podCreationTimestamp="2025-07-10 00:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:21:53.302388974 +0000 UTC m=+1.179998400" watchObservedRunningTime="2025-07-10 00:21:53.315696468 +0000 UTC m=+1.193305891" Jul 10 00:21:54.421237 update_engine[1991]: I20250710 00:21:54.420650 1991 update_attempter.cc:509] Updating boot flags... Jul 10 00:21:55.453144 sudo[2362]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:55.478863 sshd[2361]: Connection closed by 139.178.89.65 port 50790 Jul 10 00:21:55.482311 sshd-session[2359]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:55.490591 systemd[1]: sshd@8-172.31.26.174:22-139.178.89.65:50790.service: Deactivated successfully. Jul 10 00:21:55.495013 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:21:55.495871 systemd[1]: session-9.scope: Consumed 4.703s CPU time, 207.1M memory peak. Jul 10 00:21:55.506313 systemd-logind[1989]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:21:55.508563 systemd-logind[1989]: Removed session 9. 
Jul 10 00:21:57.610020 kubelet[3309]: I0710 00:21:57.609987 3309 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:21:57.611299 kubelet[3309]: I0710 00:21:57.610543 3309 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:21:57.611362 containerd[2015]: time="2025-07-10T00:21:57.610345703Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:21:58.389180 systemd[1]: Created slice kubepods-besteffort-podee486689_67df_4cf1_ae66_c56ab5975dc5.slice - libcontainer container kubepods-besteffort-podee486689_67df_4cf1_ae66_c56ab5975dc5.slice. Jul 10 00:21:58.400629 systemd[1]: Created slice kubepods-burstable-pod3e054203_3879_496d_b601_3f8aa77e7cab.slice - libcontainer container kubepods-burstable-pod3e054203_3879_496d_b601_3f8aa77e7cab.slice. Jul 10 00:21:58.463938 kubelet[3309]: I0710 00:21:58.463876 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2rf\" (UniqueName: \"kubernetes.io/projected/ee486689-67df-4cf1-ae66-c56ab5975dc5-kube-api-access-ms2rf\") pod \"kube-proxy-fcpjl\" (UID: \"ee486689-67df-4cf1-ae66-c56ab5975dc5\") " pod="kube-system/kube-proxy-fcpjl" Jul 10 00:21:58.463938 kubelet[3309]: I0710 00:21:58.463921 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-host-proc-sys-net\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.463938 kubelet[3309]: I0710 00:21:58.463941 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-bpf-maps\") pod \"cilium-npqq6\" (UID: 
\"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.464481 kubelet[3309]: I0710 00:21:58.463956 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e054203-3879-496d-b601-3f8aa77e7cab-cilium-config-path\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.464481 kubelet[3309]: I0710 00:21:58.463970 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e054203-3879-496d-b601-3f8aa77e7cab-hubble-tls\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.464481 kubelet[3309]: I0710 00:21:58.463985 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-xtables-lock\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.464481 kubelet[3309]: I0710 00:21:58.463999 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-cilium-cgroup\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.464481 kubelet[3309]: I0710 00:21:58.464013 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-lib-modules\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.464481 kubelet[3309]: I0710 00:21:58.464074 
3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-cilium-run\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.464631 kubelet[3309]: I0710 00:21:58.464116 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-etc-cni-netd\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.464631 kubelet[3309]: I0710 00:21:58.464136 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee486689-67df-4cf1-ae66-c56ab5975dc5-kube-proxy\") pod \"kube-proxy-fcpjl\" (UID: \"ee486689-67df-4cf1-ae66-c56ab5975dc5\") " pod="kube-system/kube-proxy-fcpjl" Jul 10 00:21:58.464631 kubelet[3309]: I0710 00:21:58.464151 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-cni-path\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.464631 kubelet[3309]: I0710 00:21:58.464190 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-host-proc-sys-kernel\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.464631 kubelet[3309]: I0710 00:21:58.464205 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ee486689-67df-4cf1-ae66-c56ab5975dc5-lib-modules\") pod \"kube-proxy-fcpjl\" (UID: \"ee486689-67df-4cf1-ae66-c56ab5975dc5\") " pod="kube-system/kube-proxy-fcpjl" Jul 10 00:21:58.464631 kubelet[3309]: I0710 00:21:58.464223 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-hostproc\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.464773 kubelet[3309]: I0710 00:21:58.464248 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e054203-3879-496d-b601-3f8aa77e7cab-clustermesh-secrets\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.464773 kubelet[3309]: I0710 00:21:58.464265 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee486689-67df-4cf1-ae66-c56ab5975dc5-xtables-lock\") pod \"kube-proxy-fcpjl\" (UID: \"ee486689-67df-4cf1-ae66-c56ab5975dc5\") " pod="kube-system/kube-proxy-fcpjl" Jul 10 00:21:58.464773 kubelet[3309]: I0710 00:21:58.464280 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsrg8\" (UniqueName: \"kubernetes.io/projected/3e054203-3879-496d-b601-3f8aa77e7cab-kube-api-access-bsrg8\") pod \"cilium-npqq6\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " pod="kube-system/cilium-npqq6" Jul 10 00:21:58.539668 systemd[1]: Created slice kubepods-besteffort-pod4fd23dc4_5134_4024_bfc1_846d29b52788.slice - libcontainer container kubepods-besteffort-pod4fd23dc4_5134_4024_bfc1_846d29b52788.slice. 
Jul 10 00:21:58.565029 kubelet[3309]: I0710 00:21:58.564991 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fd23dc4-5134-4024-bfc1-846d29b52788-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-czts5\" (UID: \"4fd23dc4-5134-4024-bfc1-846d29b52788\") " pod="kube-system/cilium-operator-6c4d7847fc-czts5" Jul 10 00:21:58.565706 kubelet[3309]: I0710 00:21:58.565142 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wdb6\" (UniqueName: \"kubernetes.io/projected/4fd23dc4-5134-4024-bfc1-846d29b52788-kube-api-access-4wdb6\") pod \"cilium-operator-6c4d7847fc-czts5\" (UID: \"4fd23dc4-5134-4024-bfc1-846d29b52788\") " pod="kube-system/cilium-operator-6c4d7847fc-czts5" Jul 10 00:21:58.700491 containerd[2015]: time="2025-07-10T00:21:58.700372443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fcpjl,Uid:ee486689-67df-4cf1-ae66-c56ab5975dc5,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:58.704151 containerd[2015]: time="2025-07-10T00:21:58.704110080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-npqq6,Uid:3e054203-3879-496d-b601-3f8aa77e7cab,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:58.755192 containerd[2015]: time="2025-07-10T00:21:58.755006051Z" level=info msg="connecting to shim 8a6934ecd9815bf88a5b9f521c5440ab904996d1e66852672c3c84c05cf6bc7a" address="unix:///run/containerd/s/c8b67fd2896e396c1e798df879b472407e28317e67086f4d4c2c0a655a3362cc" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:58.758396 containerd[2015]: time="2025-07-10T00:21:58.758353529Z" level=info msg="connecting to shim 1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3" address="unix:///run/containerd/s/d64a929616b6a9fb3595ec92ebd519087a47b6234318e41937ecfe932f08b92c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:58.780385 systemd[1]: Started 
cri-containerd-1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3.scope - libcontainer container 1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3. Jul 10 00:21:58.782796 systemd[1]: Started cri-containerd-8a6934ecd9815bf88a5b9f521c5440ab904996d1e66852672c3c84c05cf6bc7a.scope - libcontainer container 8a6934ecd9815bf88a5b9f521c5440ab904996d1e66852672c3c84c05cf6bc7a. Jul 10 00:21:58.822286 containerd[2015]: time="2025-07-10T00:21:58.822196816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-npqq6,Uid:3e054203-3879-496d-b601-3f8aa77e7cab,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\"" Jul 10 00:21:58.824960 containerd[2015]: time="2025-07-10T00:21:58.824564425Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:21:58.831790 containerd[2015]: time="2025-07-10T00:21:58.831740157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fcpjl,Uid:ee486689-67df-4cf1-ae66-c56ab5975dc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a6934ecd9815bf88a5b9f521c5440ab904996d1e66852672c3c84c05cf6bc7a\"" Jul 10 00:21:58.835241 containerd[2015]: time="2025-07-10T00:21:58.834450021Z" level=info msg="CreateContainer within sandbox \"8a6934ecd9815bf88a5b9f521c5440ab904996d1e66852672c3c84c05cf6bc7a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:21:58.851306 containerd[2015]: time="2025-07-10T00:21:58.851262024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-czts5,Uid:4fd23dc4-5134-4024-bfc1-846d29b52788,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:58.895956 containerd[2015]: time="2025-07-10T00:21:58.895912128Z" level=info msg="Container 6f9d6fe3f3eb51daa6a6003f4d397f8e6098ebd76c0da9542a7c5cd96d0e174a: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:21:58.915769 
containerd[2015]: time="2025-07-10T00:21:58.915729908Z" level=info msg="CreateContainer within sandbox \"8a6934ecd9815bf88a5b9f521c5440ab904996d1e66852672c3c84c05cf6bc7a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6f9d6fe3f3eb51daa6a6003f4d397f8e6098ebd76c0da9542a7c5cd96d0e174a\"" Jul 10 00:21:58.917378 containerd[2015]: time="2025-07-10T00:21:58.916309681Z" level=info msg="StartContainer for \"6f9d6fe3f3eb51daa6a6003f4d397f8e6098ebd76c0da9542a7c5cd96d0e174a\"" Jul 10 00:21:58.918473 containerd[2015]: time="2025-07-10T00:21:58.918441184Z" level=info msg="connecting to shim 6f9d6fe3f3eb51daa6a6003f4d397f8e6098ebd76c0da9542a7c5cd96d0e174a" address="unix:///run/containerd/s/c8b67fd2896e396c1e798df879b472407e28317e67086f4d4c2c0a655a3362cc" protocol=ttrpc version=3 Jul 10 00:21:58.925662 containerd[2015]: time="2025-07-10T00:21:58.925623742Z" level=info msg="connecting to shim add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3" address="unix:///run/containerd/s/8743c5c7e992ae9ab0796452c077449e33311d96107f1f70405b10a8b8460c17" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:58.943525 systemd[1]: Started cri-containerd-6f9d6fe3f3eb51daa6a6003f4d397f8e6098ebd76c0da9542a7c5cd96d0e174a.scope - libcontainer container 6f9d6fe3f3eb51daa6a6003f4d397f8e6098ebd76c0da9542a7c5cd96d0e174a. Jul 10 00:21:58.951325 systemd[1]: Started cri-containerd-add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3.scope - libcontainer container add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3. 
Jul 10 00:21:58.998240 containerd[2015]: time="2025-07-10T00:21:58.998202103Z" level=info msg="StartContainer for \"6f9d6fe3f3eb51daa6a6003f4d397f8e6098ebd76c0da9542a7c5cd96d0e174a\" returns successfully" Jul 10 00:21:59.018229 containerd[2015]: time="2025-07-10T00:21:59.018187337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-czts5,Uid:4fd23dc4-5134-4024-bfc1-846d29b52788,Namespace:kube-system,Attempt:0,} returns sandbox id \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\"" Jul 10 00:22:01.906588 kubelet[3309]: I0710 00:22:01.906509 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fcpjl" podStartSLOduration=3.906311922 podStartE2EDuration="3.906311922s" podCreationTimestamp="2025-07-10 00:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:21:59.285697582 +0000 UTC m=+7.163307005" watchObservedRunningTime="2025-07-10 00:22:01.906311922 +0000 UTC m=+9.783921328" Jul 10 00:22:09.157151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591718182.mount: Deactivated successfully. 
Jul 10 00:22:11.209878 containerd[2015]: time="2025-07-10T00:22:11.209813250Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:11.211113 containerd[2015]: time="2025-07-10T00:22:11.211077275Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 10 00:22:11.212622 containerd[2015]: time="2025-07-10T00:22:11.212564267Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:11.214006 containerd[2015]: time="2025-07-10T00:22:11.213706276Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.389092749s" Jul 10 00:22:11.214006 containerd[2015]: time="2025-07-10T00:22:11.213742463Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 10 00:22:11.215792 containerd[2015]: time="2025-07-10T00:22:11.215765493Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:22:11.219035 containerd[2015]: time="2025-07-10T00:22:11.218990929Z" level=info msg="CreateContainer within sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:22:11.253182 containerd[2015]: time="2025-07-10T00:22:11.251688481Z" level=info msg="Container c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:11.272852 containerd[2015]: time="2025-07-10T00:22:11.272799843Z" level=info msg="CreateContainer within sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\"" Jul 10 00:22:11.273714 containerd[2015]: time="2025-07-10T00:22:11.273601146Z" level=info msg="StartContainer for \"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\"" Jul 10 00:22:11.276101 containerd[2015]: time="2025-07-10T00:22:11.276054193Z" level=info msg="connecting to shim c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5" address="unix:///run/containerd/s/d64a929616b6a9fb3595ec92ebd519087a47b6234318e41937ecfe932f08b92c" protocol=ttrpc version=3 Jul 10 00:22:11.390627 systemd[1]: Started cri-containerd-c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5.scope - libcontainer container c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5. Jul 10 00:22:11.427384 containerd[2015]: time="2025-07-10T00:22:11.427268871Z" level=info msg="StartContainer for \"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\" returns successfully" Jul 10 00:22:11.440612 systemd[1]: cri-containerd-c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5.scope: Deactivated successfully. 
Jul 10 00:22:11.515137 containerd[2015]: time="2025-07-10T00:22:11.514625760Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\" id:\"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\" pid:3993 exited_at:{seconds:1752106931 nanos:442990383}" Jul 10 00:22:11.516107 containerd[2015]: time="2025-07-10T00:22:11.516071815Z" level=info msg="received exit event container_id:\"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\" id:\"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\" pid:3993 exited_at:{seconds:1752106931 nanos:442990383}" Jul 10 00:22:11.549513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5-rootfs.mount: Deactivated successfully. Jul 10 00:22:12.333000 containerd[2015]: time="2025-07-10T00:22:12.332958404Z" level=info msg="CreateContainer within sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:22:12.345973 containerd[2015]: time="2025-07-10T00:22:12.345934371Z" level=info msg="Container 8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:12.354396 containerd[2015]: time="2025-07-10T00:22:12.354358623Z" level=info msg="CreateContainer within sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\"" Jul 10 00:22:12.355117 containerd[2015]: time="2025-07-10T00:22:12.355051022Z" level=info msg="StartContainer for \"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\"" Jul 10 00:22:12.358193 containerd[2015]: time="2025-07-10T00:22:12.358038369Z" level=info msg="connecting to shim 
8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de" address="unix:///run/containerd/s/d64a929616b6a9fb3595ec92ebd519087a47b6234318e41937ecfe932f08b92c" protocol=ttrpc version=3 Jul 10 00:22:12.385382 systemd[1]: Started cri-containerd-8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de.scope - libcontainer container 8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de. Jul 10 00:22:12.417085 containerd[2015]: time="2025-07-10T00:22:12.417048624Z" level=info msg="StartContainer for \"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\" returns successfully" Jul 10 00:22:12.432003 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:22:12.432741 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:22:12.433445 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:22:12.434815 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:22:12.437883 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:22:12.438780 systemd[1]: cri-containerd-8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de.scope: Deactivated successfully. 
Jul 10 00:22:12.444695 containerd[2015]: time="2025-07-10T00:22:12.444659126Z" level=info msg="received exit event container_id:\"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\" id:\"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\" pid:4037 exited_at:{seconds:1752106932 nanos:442996471}" Jul 10 00:22:12.445245 containerd[2015]: time="2025-07-10T00:22:12.444885910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\" id:\"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\" pid:4037 exited_at:{seconds:1752106932 nanos:442996471}" Jul 10 00:22:12.474072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:22:13.340274 containerd[2015]: time="2025-07-10T00:22:13.340225063Z" level=info msg="CreateContainer within sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:22:13.347996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de-rootfs.mount: Deactivated successfully. Jul 10 00:22:13.362683 containerd[2015]: time="2025-07-10T00:22:13.360403070Z" level=info msg="Container 90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:13.362403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447863999.mount: Deactivated successfully. 
Jul 10 00:22:13.380650 containerd[2015]: time="2025-07-10T00:22:13.380607187Z" level=info msg="CreateContainer within sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\"" Jul 10 00:22:13.381179 containerd[2015]: time="2025-07-10T00:22:13.381059830Z" level=info msg="StartContainer for \"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\"" Jul 10 00:22:13.382802 containerd[2015]: time="2025-07-10T00:22:13.382753356Z" level=info msg="connecting to shim 90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd" address="unix:///run/containerd/s/d64a929616b6a9fb3595ec92ebd519087a47b6234318e41937ecfe932f08b92c" protocol=ttrpc version=3 Jul 10 00:22:13.405546 systemd[1]: Started cri-containerd-90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd.scope - libcontainer container 90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd. Jul 10 00:22:13.443532 systemd[1]: cri-containerd-90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd.scope: Deactivated successfully. 
Jul 10 00:22:13.445935 containerd[2015]: time="2025-07-10T00:22:13.445903631Z" level=info msg="StartContainer for \"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\" returns successfully" Jul 10 00:22:13.446267 containerd[2015]: time="2025-07-10T00:22:13.446247388Z" level=info msg="TaskExit event in podsandbox handler container_id:\"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\" id:\"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\" pid:4091 exited_at:{seconds:1752106933 nanos:446001240}" Jul 10 00:22:13.446462 containerd[2015]: time="2025-07-10T00:22:13.446309552Z" level=info msg="received exit event container_id:\"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\" id:\"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\" pid:4091 exited_at:{seconds:1752106933 nanos:446001240}" Jul 10 00:22:13.467962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd-rootfs.mount: Deactivated successfully. Jul 10 00:22:14.341971 containerd[2015]: time="2025-07-10T00:22:14.341940335Z" level=info msg="CreateContainer within sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:22:14.353874 containerd[2015]: time="2025-07-10T00:22:14.353833804Z" level=info msg="Container 1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:14.357673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2337528979.mount: Deactivated successfully. 
Jul 10 00:22:14.370376 containerd[2015]: time="2025-07-10T00:22:14.370341611Z" level=info msg="CreateContainer within sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\"" Jul 10 00:22:14.372182 containerd[2015]: time="2025-07-10T00:22:14.371512404Z" level=info msg="StartContainer for \"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\"" Jul 10 00:22:14.372629 containerd[2015]: time="2025-07-10T00:22:14.372601899Z" level=info msg="connecting to shim 1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2" address="unix:///run/containerd/s/d64a929616b6a9fb3595ec92ebd519087a47b6234318e41937ecfe932f08b92c" protocol=ttrpc version=3 Jul 10 00:22:14.401384 systemd[1]: Started cri-containerd-1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2.scope - libcontainer container 1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2. Jul 10 00:22:14.429628 systemd[1]: cri-containerd-1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2.scope: Deactivated successfully. 
Jul 10 00:22:14.430891 containerd[2015]: time="2025-07-10T00:22:14.430855726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\" id:\"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\" pid:4137 exited_at:{seconds:1752106934 nanos:429982561}" Jul 10 00:22:14.431059 containerd[2015]: time="2025-07-10T00:22:14.430957140Z" level=info msg="received exit event container_id:\"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\" id:\"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\" pid:4137 exited_at:{seconds:1752106934 nanos:429982561}" Jul 10 00:22:14.438980 containerd[2015]: time="2025-07-10T00:22:14.438945235Z" level=info msg="StartContainer for \"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\" returns successfully" Jul 10 00:22:14.451719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2-rootfs.mount: Deactivated successfully. 
Jul 10 00:22:15.096715 containerd[2015]: time="2025-07-10T00:22:15.096661107Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:15.097627 containerd[2015]: time="2025-07-10T00:22:15.097586472Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 10 00:22:15.098565 containerd[2015]: time="2025-07-10T00:22:15.098504369Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:15.099982 containerd[2015]: time="2025-07-10T00:22:15.099794726Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.883995042s" Jul 10 00:22:15.099982 containerd[2015]: time="2025-07-10T00:22:15.099834602Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 10 00:22:15.102742 containerd[2015]: time="2025-07-10T00:22:15.102710194Z" level=info msg="CreateContainer within sandbox \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:22:15.119236 containerd[2015]: time="2025-07-10T00:22:15.119186108Z" level=info msg="Container 
475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:15.126815 containerd[2015]: time="2025-07-10T00:22:15.126761012Z" level=info msg="CreateContainer within sandbox \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\"" Jul 10 00:22:15.127403 containerd[2015]: time="2025-07-10T00:22:15.127327754Z" level=info msg="StartContainer for \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\"" Jul 10 00:22:15.128674 containerd[2015]: time="2025-07-10T00:22:15.128600294Z" level=info msg="connecting to shim 475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b" address="unix:///run/containerd/s/8743c5c7e992ae9ab0796452c077449e33311d96107f1f70405b10a8b8460c17" protocol=ttrpc version=3 Jul 10 00:22:15.146334 systemd[1]: Started cri-containerd-475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b.scope - libcontainer container 475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b. 
Jul 10 00:22:15.180872 containerd[2015]: time="2025-07-10T00:22:15.180755778Z" level=info msg="StartContainer for \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\" returns successfully" Jul 10 00:22:15.362340 containerd[2015]: time="2025-07-10T00:22:15.362293549Z" level=info msg="CreateContainer within sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:22:15.420095 containerd[2015]: time="2025-07-10T00:22:15.417904420Z" level=info msg="Container 435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:15.437323 containerd[2015]: time="2025-07-10T00:22:15.437279336Z" level=info msg="CreateContainer within sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\"" Jul 10 00:22:15.439292 containerd[2015]: time="2025-07-10T00:22:15.439258834Z" level=info msg="StartContainer for \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\"" Jul 10 00:22:15.440396 containerd[2015]: time="2025-07-10T00:22:15.440362821Z" level=info msg="connecting to shim 435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33" address="unix:///run/containerd/s/d64a929616b6a9fb3595ec92ebd519087a47b6234318e41937ecfe932f08b92c" protocol=ttrpc version=3 Jul 10 00:22:15.489546 systemd[1]: Started cri-containerd-435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33.scope - libcontainer container 435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33. 
Jul 10 00:22:15.518546 kubelet[3309]: I0710 00:22:15.518482 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-czts5" podStartSLOduration=1.437307683 podStartE2EDuration="17.518443704s" podCreationTimestamp="2025-07-10 00:21:58 +0000 UTC" firstStartedPulling="2025-07-10 00:21:59.019673717 +0000 UTC m=+6.897283134" lastFinishedPulling="2025-07-10 00:22:15.100809738 +0000 UTC m=+22.978419155" observedRunningTime="2025-07-10 00:22:15.399284917 +0000 UTC m=+23.276894343" watchObservedRunningTime="2025-07-10 00:22:15.518443704 +0000 UTC m=+23.396053128" Jul 10 00:22:15.618786 containerd[2015]: time="2025-07-10T00:22:15.618650895Z" level=info msg="StartContainer for \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" returns successfully" Jul 10 00:22:15.828615 containerd[2015]: time="2025-07-10T00:22:15.828568515Z" level=info msg="TaskExit event in podsandbox handler container_id:\"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" id:\"a76fba0c0df465d651ffc21262e540b50fe80fc2f6acc457dc9c0d19e2ec7a13\" pid:4249 exited_at:{seconds:1752106935 nanos:827939960}" Jul 10 00:22:15.874250 kubelet[3309]: I0710 00:22:15.873639 3309 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:22:16.090934 systemd[1]: Created slice kubepods-burstable-podd1278637_ca9f_4dc4_8f1c_321c26e3d13d.slice - libcontainer container kubepods-burstable-podd1278637_ca9f_4dc4_8f1c_321c26e3d13d.slice. Jul 10 00:22:16.104423 systemd[1]: Created slice kubepods-burstable-pod198f58fd_9bb5_4a90_bf21_5a7c5ddf0f86.slice - libcontainer container kubepods-burstable-pod198f58fd_9bb5_4a90_bf21_5a7c5ddf0f86.slice. 
Jul 10 00:22:16.112118 kubelet[3309]: W0710 00:22:16.112039 3309 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-26-174" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-174' and this object Jul 10 00:22:16.112118 kubelet[3309]: E0710 00:22:16.112088 3309 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-26-174\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-174' and this object" logger="UnhandledError" Jul 10 00:22:16.113180 kubelet[3309]: I0710 00:22:16.112539 3309 status_manager.go:890] "Failed to get status for pod" podUID="d1278637-ca9f-4dc4-8f1c-321c26e3d13d" pod="kube-system/coredns-668d6bf9bc-mgx4m" err="pods \"coredns-668d6bf9bc-mgx4m\" is forbidden: User \"system:node:ip-172-31-26-174\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-174' and this object" Jul 10 00:22:16.195887 kubelet[3309]: I0710 00:22:16.195691 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbgfw\" (UniqueName: \"kubernetes.io/projected/198f58fd-9bb5-4a90-bf21-5a7c5ddf0f86-kube-api-access-jbgfw\") pod \"coredns-668d6bf9bc-s65wj\" (UID: \"198f58fd-9bb5-4a90-bf21-5a7c5ddf0f86\") " pod="kube-system/coredns-668d6bf9bc-s65wj" Jul 10 00:22:16.196407 kubelet[3309]: I0710 00:22:16.196265 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1278637-ca9f-4dc4-8f1c-321c26e3d13d-config-volume\") pod \"coredns-668d6bf9bc-mgx4m\" 
(UID: \"d1278637-ca9f-4dc4-8f1c-321c26e3d13d\") " pod="kube-system/coredns-668d6bf9bc-mgx4m" Jul 10 00:22:16.196407 kubelet[3309]: I0710 00:22:16.196305 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdsk8\" (UniqueName: \"kubernetes.io/projected/d1278637-ca9f-4dc4-8f1c-321c26e3d13d-kube-api-access-sdsk8\") pod \"coredns-668d6bf9bc-mgx4m\" (UID: \"d1278637-ca9f-4dc4-8f1c-321c26e3d13d\") " pod="kube-system/coredns-668d6bf9bc-mgx4m" Jul 10 00:22:16.196407 kubelet[3309]: I0710 00:22:16.196337 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/198f58fd-9bb5-4a90-bf21-5a7c5ddf0f86-config-volume\") pod \"coredns-668d6bf9bc-s65wj\" (UID: \"198f58fd-9bb5-4a90-bf21-5a7c5ddf0f86\") " pod="kube-system/coredns-668d6bf9bc-s65wj" Jul 10 00:22:17.301810 containerd[2015]: time="2025-07-10T00:22:17.301752799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mgx4m,Uid:d1278637-ca9f-4dc4-8f1c-321c26e3d13d,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:17.312561 containerd[2015]: time="2025-07-10T00:22:17.311150357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s65wj,Uid:198f58fd-9bb5-4a90-bf21-5a7c5ddf0f86,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:19.459457 (udev-worker)[4284]: Network interface NamePolicy= disabled on kernel command line. Jul 10 00:22:19.460480 systemd-networkd[1734]: cilium_host: Link UP Jul 10 00:22:19.461633 systemd-networkd[1734]: cilium_net: Link UP Jul 10 00:22:19.462280 systemd-networkd[1734]: cilium_net: Gained carrier Jul 10 00:22:19.462334 (udev-worker)[4343]: Network interface NamePolicy= disabled on kernel command line. 
Jul 10 00:22:19.462992 systemd-networkd[1734]: cilium_host: Gained carrier
Jul 10 00:22:19.664335 systemd-networkd[1734]: cilium_vxlan: Link UP
Jul 10 00:22:19.664344 systemd-networkd[1734]: cilium_vxlan: Gained carrier
Jul 10 00:22:20.032420 systemd-networkd[1734]: cilium_host: Gained IPv6LL
Jul 10 00:22:20.352368 systemd-networkd[1734]: cilium_net: Gained IPv6LL
Jul 10 00:22:20.361192 kernel: NET: Registered PF_ALG protocol family
Jul 10 00:22:21.048058 systemd-networkd[1734]: lxc_health: Link UP
Jul 10 00:22:21.049476 systemd-networkd[1734]: lxc_health: Gained carrier
Jul 10 00:22:21.050316 (udev-worker)[4351]: Network interface NamePolicy= disabled on kernel command line.
Jul 10 00:22:21.248406 systemd-networkd[1734]: cilium_vxlan: Gained IPv6LL
Jul 10 00:22:21.419970 systemd-networkd[1734]: lxc3ff68a2a33ac: Link UP
Jul 10 00:22:21.420251 kernel: eth0: renamed from tmpea0e8
Jul 10 00:22:21.424219 systemd-networkd[1734]: lxc88827883f98f: Link UP
Jul 10 00:22:21.427241 kernel: eth0: renamed from tmp53cbf
Jul 10 00:22:21.426570 systemd-networkd[1734]: lxc3ff68a2a33ac: Gained carrier
Jul 10 00:22:21.429662 systemd-networkd[1734]: lxc88827883f98f: Gained carrier
Jul 10 00:22:22.528378 systemd-networkd[1734]: lxc3ff68a2a33ac: Gained IPv6LL
Jul 10 00:22:22.592581 systemd-networkd[1734]: lxc_health: Gained IPv6LL
Jul 10 00:22:22.745395 kubelet[3309]: I0710 00:22:22.744531 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-npqq6" podStartSLOduration=12.353605769 podStartE2EDuration="24.744498386s" podCreationTimestamp="2025-07-10 00:21:58 +0000 UTC" firstStartedPulling="2025-07-10 00:21:58.824053715 +0000 UTC m=+6.701663126" lastFinishedPulling="2025-07-10 00:22:11.214946326 +0000 UTC m=+19.092555743" observedRunningTime="2025-07-10 00:22:16.537110054 +0000 UTC m=+24.414719496" watchObservedRunningTime="2025-07-10 00:22:22.744498386 +0000 UTC m=+30.622107885"
Jul 10 00:22:23.040475 systemd-networkd[1734]: lxc88827883f98f: Gained IPv6LL
Jul 10 00:22:25.371643 ntpd[1973]: Listen normally on 7 cilium_host 192.168.0.196:123
Jul 10 00:22:25.373046 ntpd[1973]: 10 Jul 00:22:25 ntpd[1973]: Listen normally on 7 cilium_host 192.168.0.196:123
Jul 10 00:22:25.373046 ntpd[1973]: 10 Jul 00:22:25 ntpd[1973]: Listen normally on 8 cilium_net [fe80::6036:83ff:fe63:cbee%4]:123
Jul 10 00:22:25.373046 ntpd[1973]: 10 Jul 00:22:25 ntpd[1973]: Listen normally on 9 cilium_host [fe80::680a:feff:fe1d:67bc%5]:123
Jul 10 00:22:25.373046 ntpd[1973]: 10 Jul 00:22:25 ntpd[1973]: Listen normally on 10 cilium_vxlan [fe80::949a:37ff:feb8:1db4%6]:123
Jul 10 00:22:25.373046 ntpd[1973]: 10 Jul 00:22:25 ntpd[1973]: Listen normally on 11 lxc_health [fe80::1cc7:15ff:fe86:d047%8]:123
Jul 10 00:22:25.371737 ntpd[1973]: Listen normally on 8 cilium_net [fe80::6036:83ff:fe63:cbee%4]:123
Jul 10 00:22:25.373880 ntpd[1973]: 10 Jul 00:22:25 ntpd[1973]: Listen normally on 12 lxc3ff68a2a33ac [fe80::787d:81ff:feec:61b5%10]:123
Jul 10 00:22:25.373880 ntpd[1973]: 10 Jul 00:22:25 ntpd[1973]: Listen normally on 13 lxc88827883f98f [fe80::58f6:19ff:febf:dfdb%12]:123
Jul 10 00:22:25.371794 ntpd[1973]: Listen normally on 9 cilium_host [fe80::680a:feff:fe1d:67bc%5]:123
Jul 10 00:22:25.371835 ntpd[1973]: Listen normally on 10 cilium_vxlan [fe80::949a:37ff:feb8:1db4%6]:123
Jul 10 00:22:25.371875 ntpd[1973]: Listen normally on 11 lxc_health [fe80::1cc7:15ff:fe86:d047%8]:123
Jul 10 00:22:25.373379 ntpd[1973]: Listen normally on 12 lxc3ff68a2a33ac [fe80::787d:81ff:feec:61b5%10]:123
Jul 10 00:22:25.373485 ntpd[1973]: Listen normally on 13 lxc88827883f98f [fe80::58f6:19ff:febf:dfdb%12]:123
Jul 10 00:22:25.731366 containerd[2015]: time="2025-07-10T00:22:25.731256539Z" level=info msg="connecting to shim ea0e8bf6b4d3351411c3d29046a995ee946a1a0c6b9c34b8f082a9f9da79cb6c" address="unix:///run/containerd/s/b207cc1fb7652b8fdb08b56e4d0a2c83306c5f486fe11c61e2b6676c79b41ded" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:22:25.776188 containerd[2015]: time="2025-07-10T00:22:25.774081374Z" level=info msg="connecting to shim 53cbf4d605e148e6bdb0981256d25c1047403b7428588065b8948ed1796ca26d" address="unix:///run/containerd/s/9165c22fdb9829b70cb593b6189ab21444ea64f2778485030a960504724eee59" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:22:25.815665 systemd[1]: Started cri-containerd-ea0e8bf6b4d3351411c3d29046a995ee946a1a0c6b9c34b8f082a9f9da79cb6c.scope - libcontainer container ea0e8bf6b4d3351411c3d29046a995ee946a1a0c6b9c34b8f082a9f9da79cb6c.
Jul 10 00:22:25.823320 systemd[1]: Started cri-containerd-53cbf4d605e148e6bdb0981256d25c1047403b7428588065b8948ed1796ca26d.scope - libcontainer container 53cbf4d605e148e6bdb0981256d25c1047403b7428588065b8948ed1796ca26d.
Jul 10 00:22:25.965209 containerd[2015]: time="2025-07-10T00:22:25.965099173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mgx4m,Uid:d1278637-ca9f-4dc4-8f1c-321c26e3d13d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53cbf4d605e148e6bdb0981256d25c1047403b7428588065b8948ed1796ca26d\""
Jul 10 00:22:25.970529 containerd[2015]: time="2025-07-10T00:22:25.970494018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s65wj,Uid:198f58fd-9bb5-4a90-bf21-5a7c5ddf0f86,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea0e8bf6b4d3351411c3d29046a995ee946a1a0c6b9c34b8f082a9f9da79cb6c\""
Jul 10 00:22:26.006383 containerd[2015]: time="2025-07-10T00:22:26.006258576Z" level=info msg="CreateContainer within sandbox \"ea0e8bf6b4d3351411c3d29046a995ee946a1a0c6b9c34b8f082a9f9da79cb6c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 10 00:22:26.006979 containerd[2015]: time="2025-07-10T00:22:26.006630237Z" level=info msg="CreateContainer within sandbox \"53cbf4d605e148e6bdb0981256d25c1047403b7428588065b8948ed1796ca26d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 10 00:22:26.035672 containerd[2015]: time="2025-07-10T00:22:26.035634079Z" level=info msg="Container a88cd5a5d568ade1906f68789d7b6617d0c754f00b864f9f708ff28ed813ca71: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:22:26.036242 containerd[2015]: time="2025-07-10T00:22:26.036207281Z" level=info msg="Container 3eed38ce89d255f6bae95acdfa3982aacbc97a19f936c66499bc315e832f5bbb: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:22:26.046419 containerd[2015]: time="2025-07-10T00:22:26.046384375Z" level=info msg="CreateContainer within sandbox \"ea0e8bf6b4d3351411c3d29046a995ee946a1a0c6b9c34b8f082a9f9da79cb6c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3eed38ce89d255f6bae95acdfa3982aacbc97a19f936c66499bc315e832f5bbb\""
Jul 10 00:22:26.047299 containerd[2015]: time="2025-07-10T00:22:26.047270116Z" level=info msg="StartContainer for \"3eed38ce89d255f6bae95acdfa3982aacbc97a19f936c66499bc315e832f5bbb\""
Jul 10 00:22:26.047986 containerd[2015]: time="2025-07-10T00:22:26.047956676Z" level=info msg="connecting to shim 3eed38ce89d255f6bae95acdfa3982aacbc97a19f936c66499bc315e832f5bbb" address="unix:///run/containerd/s/b207cc1fb7652b8fdb08b56e4d0a2c83306c5f486fe11c61e2b6676c79b41ded" protocol=ttrpc version=3
Jul 10 00:22:26.049582 containerd[2015]: time="2025-07-10T00:22:26.049482145Z" level=info msg="CreateContainer within sandbox \"53cbf4d605e148e6bdb0981256d25c1047403b7428588065b8948ed1796ca26d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a88cd5a5d568ade1906f68789d7b6617d0c754f00b864f9f708ff28ed813ca71\""
Jul 10 00:22:26.051830 containerd[2015]: time="2025-07-10T00:22:26.051808482Z" level=info msg="StartContainer for \"a88cd5a5d568ade1906f68789d7b6617d0c754f00b864f9f708ff28ed813ca71\""
Jul 10 00:22:26.053680 containerd[2015]: time="2025-07-10T00:22:26.053650013Z" level=info msg="connecting to shim a88cd5a5d568ade1906f68789d7b6617d0c754f00b864f9f708ff28ed813ca71" address="unix:///run/containerd/s/9165c22fdb9829b70cb593b6189ab21444ea64f2778485030a960504724eee59" protocol=ttrpc version=3
Jul 10 00:22:26.071337 systemd[1]: Started cri-containerd-3eed38ce89d255f6bae95acdfa3982aacbc97a19f936c66499bc315e832f5bbb.scope - libcontainer container 3eed38ce89d255f6bae95acdfa3982aacbc97a19f936c66499bc315e832f5bbb.
Jul 10 00:22:26.074734 systemd[1]: Started cri-containerd-a88cd5a5d568ade1906f68789d7b6617d0c754f00b864f9f708ff28ed813ca71.scope - libcontainer container a88cd5a5d568ade1906f68789d7b6617d0c754f00b864f9f708ff28ed813ca71.
Jul 10 00:22:26.117669 containerd[2015]: time="2025-07-10T00:22:26.117617232Z" level=info msg="StartContainer for \"a88cd5a5d568ade1906f68789d7b6617d0c754f00b864f9f708ff28ed813ca71\" returns successfully"
Jul 10 00:22:26.118052 containerd[2015]: time="2025-07-10T00:22:26.118017605Z" level=info msg="StartContainer for \"3eed38ce89d255f6bae95acdfa3982aacbc97a19f936c66499bc315e832f5bbb\" returns successfully"
Jul 10 00:22:26.447725 kubelet[3309]: I0710 00:22:26.447119 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mgx4m" podStartSLOduration=28.447097252 podStartE2EDuration="28.447097252s" podCreationTimestamp="2025-07-10 00:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:26.445422849 +0000 UTC m=+34.323032320" watchObservedRunningTime="2025-07-10 00:22:26.447097252 +0000 UTC m=+34.324706677"
Jul 10 00:22:26.714453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466526681.mount: Deactivated successfully.
Jul 10 00:22:27.438979 kubelet[3309]: I0710 00:22:27.438593 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s65wj" podStartSLOduration=29.438572982 podStartE2EDuration="29.438572982s" podCreationTimestamp="2025-07-10 00:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:26.468178949 +0000 UTC m=+34.345788397" watchObservedRunningTime="2025-07-10 00:22:27.438572982 +0000 UTC m=+35.316182406"
Jul 10 00:22:28.472538 systemd[1]: Started sshd@9-172.31.26.174:22-139.178.89.65:40456.service - OpenSSH per-connection server daemon (139.178.89.65:40456).
Jul 10 00:22:28.683729 sshd[4878]: Accepted publickey for core from 139.178.89.65 port 40456 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:22:28.685613 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:22:28.700622 systemd-logind[1989]: New session 10 of user core.
Jul 10 00:22:28.706358 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 10 00:22:29.494070 sshd[4882]: Connection closed by 139.178.89.65 port 40456
Jul 10 00:22:29.495006 sshd-session[4878]: pam_unix(sshd:session): session closed for user core
Jul 10 00:22:29.499596 systemd[1]: sshd@9-172.31.26.174:22-139.178.89.65:40456.service: Deactivated successfully.
Jul 10 00:22:29.501881 systemd[1]: session-10.scope: Deactivated successfully.
Jul 10 00:22:29.503038 systemd-logind[1989]: Session 10 logged out. Waiting for processes to exit.
Jul 10 00:22:29.504965 systemd-logind[1989]: Removed session 10.
Jul 10 00:22:34.528117 systemd[1]: Started sshd@10-172.31.26.174:22-139.178.89.65:50024.service - OpenSSH per-connection server daemon (139.178.89.65:50024).
Jul 10 00:22:34.710473 sshd[4897]: Accepted publickey for core from 139.178.89.65 port 50024 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:22:34.712004 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:22:34.717228 systemd-logind[1989]: New session 11 of user core.
Jul 10 00:22:34.727423 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 10 00:22:34.923793 sshd[4899]: Connection closed by 139.178.89.65 port 50024
Jul 10 00:22:34.924416 sshd-session[4897]: pam_unix(sshd:session): session closed for user core
Jul 10 00:22:34.928481 systemd[1]: sshd@10-172.31.26.174:22-139.178.89.65:50024.service: Deactivated successfully.
Jul 10 00:22:34.930611 systemd[1]: session-11.scope: Deactivated successfully.
Jul 10 00:22:34.931937 systemd-logind[1989]: Session 11 logged out. Waiting for processes to exit.
Jul 10 00:22:34.933800 systemd-logind[1989]: Removed session 11.
Jul 10 00:22:39.963459 systemd[1]: Started sshd@11-172.31.26.174:22-139.178.89.65:50912.service - OpenSSH per-connection server daemon (139.178.89.65:50912).
Jul 10 00:22:40.161400 sshd[4913]: Accepted publickey for core from 139.178.89.65 port 50912 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:22:40.163015 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:22:40.168591 systemd-logind[1989]: New session 12 of user core.
Jul 10 00:22:40.172383 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 10 00:22:40.373875 sshd[4915]: Connection closed by 139.178.89.65 port 50912
Jul 10 00:22:40.374407 sshd-session[4913]: pam_unix(sshd:session): session closed for user core
Jul 10 00:22:40.378300 systemd[1]: sshd@11-172.31.26.174:22-139.178.89.65:50912.service: Deactivated successfully.
Jul 10 00:22:40.380287 systemd[1]: session-12.scope: Deactivated successfully.
Jul 10 00:22:40.381449 systemd-logind[1989]: Session 12 logged out. Waiting for processes to exit.
Jul 10 00:22:40.383002 systemd-logind[1989]: Removed session 12.
Jul 10 00:22:45.412515 systemd[1]: Started sshd@12-172.31.26.174:22-139.178.89.65:50916.service - OpenSSH per-connection server daemon (139.178.89.65:50916).
Jul 10 00:22:45.587121 sshd[4928]: Accepted publickey for core from 139.178.89.65 port 50916 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:22:45.588540 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:22:45.593208 systemd-logind[1989]: New session 13 of user core.
Jul 10 00:22:45.598346 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 10 00:22:45.778716 sshd[4930]: Connection closed by 139.178.89.65 port 50916
Jul 10 00:22:45.779439 sshd-session[4928]: pam_unix(sshd:session): session closed for user core
Jul 10 00:22:45.783134 systemd[1]: sshd@12-172.31.26.174:22-139.178.89.65:50916.service: Deactivated successfully.
Jul 10 00:22:45.785000 systemd[1]: session-13.scope: Deactivated successfully.
Jul 10 00:22:45.786094 systemd-logind[1989]: Session 13 logged out. Waiting for processes to exit.
Jul 10 00:22:45.787405 systemd-logind[1989]: Removed session 13.
Jul 10 00:22:45.811264 systemd[1]: Started sshd@13-172.31.26.174:22-139.178.89.65:50922.service - OpenSSH per-connection server daemon (139.178.89.65:50922).
Jul 10 00:22:45.983741 sshd[4943]: Accepted publickey for core from 139.178.89.65 port 50922 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:22:45.985235 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:22:45.990997 systemd-logind[1989]: New session 14 of user core.
Jul 10 00:22:45.995427 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 10 00:22:46.233082 sshd[4945]: Connection closed by 139.178.89.65 port 50922
Jul 10 00:22:46.235240 sshd-session[4943]: pam_unix(sshd:session): session closed for user core
Jul 10 00:22:46.239615 systemd-logind[1989]: Session 14 logged out. Waiting for processes to exit.
Jul 10 00:22:46.241769 systemd[1]: sshd@13-172.31.26.174:22-139.178.89.65:50922.service: Deactivated successfully.
Jul 10 00:22:46.245428 systemd[1]: session-14.scope: Deactivated successfully.
Jul 10 00:22:46.248403 systemd-logind[1989]: Removed session 14.
Jul 10 00:22:46.266643 systemd[1]: Started sshd@14-172.31.26.174:22-139.178.89.65:50924.service - OpenSSH per-connection server daemon (139.178.89.65:50924).
Jul 10 00:22:46.434599 sshd[4955]: Accepted publickey for core from 139.178.89.65 port 50924 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:22:46.436297 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:22:46.442342 systemd-logind[1989]: New session 15 of user core.
Jul 10 00:22:46.445398 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 10 00:22:46.642006 sshd[4957]: Connection closed by 139.178.89.65 port 50924
Jul 10 00:22:46.642589 sshd-session[4955]: pam_unix(sshd:session): session closed for user core
Jul 10 00:22:46.645851 systemd[1]: sshd@14-172.31.26.174:22-139.178.89.65:50924.service: Deactivated successfully.
Jul 10 00:22:46.648208 systemd[1]: session-15.scope: Deactivated successfully.
Jul 10 00:22:46.650393 systemd-logind[1989]: Session 15 logged out. Waiting for processes to exit.
Jul 10 00:22:46.651839 systemd-logind[1989]: Removed session 15.
Jul 10 00:22:51.677387 systemd[1]: Started sshd@15-172.31.26.174:22-139.178.89.65:43378.service - OpenSSH per-connection server daemon (139.178.89.65:43378).
Jul 10 00:22:51.853646 sshd[4969]: Accepted publickey for core from 139.178.89.65 port 43378 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:22:51.856085 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:22:51.864236 systemd-logind[1989]: New session 16 of user core.
Jul 10 00:22:51.871370 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 10 00:22:52.055074 sshd[4971]: Connection closed by 139.178.89.65 port 43378
Jul 10 00:22:52.056140 sshd-session[4969]: pam_unix(sshd:session): session closed for user core
Jul 10 00:22:52.060501 systemd[1]: sshd@15-172.31.26.174:22-139.178.89.65:43378.service: Deactivated successfully.
Jul 10 00:22:52.062982 systemd[1]: session-16.scope: Deactivated successfully.
Jul 10 00:22:52.064032 systemd-logind[1989]: Session 16 logged out. Waiting for processes to exit.
Jul 10 00:22:52.065700 systemd-logind[1989]: Removed session 16.
Jul 10 00:22:57.088379 systemd[1]: Started sshd@16-172.31.26.174:22-139.178.89.65:43382.service - OpenSSH per-connection server daemon (139.178.89.65:43382).
Jul 10 00:22:57.257559 sshd[4987]: Accepted publickey for core from 139.178.89.65 port 43382 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:22:57.259022 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:22:57.264202 systemd-logind[1989]: New session 17 of user core.
Jul 10 00:22:57.268333 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 10 00:22:57.462273 sshd[4989]: Connection closed by 139.178.89.65 port 43382
Jul 10 00:22:57.463142 sshd-session[4987]: pam_unix(sshd:session): session closed for user core
Jul 10 00:22:57.467469 systemd[1]: sshd@16-172.31.26.174:22-139.178.89.65:43382.service: Deactivated successfully.
Jul 10 00:22:57.469736 systemd[1]: session-17.scope: Deactivated successfully.
Jul 10 00:22:57.470524 systemd-logind[1989]: Session 17 logged out. Waiting for processes to exit.
Jul 10 00:22:57.472925 systemd-logind[1989]: Removed session 17.
Jul 10 00:22:57.495067 systemd[1]: Started sshd@17-172.31.26.174:22-139.178.89.65:43388.service - OpenSSH per-connection server daemon (139.178.89.65:43388).
Jul 10 00:22:57.664462 sshd[5001]: Accepted publickey for core from 139.178.89.65 port 43388 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:22:57.665756 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:22:57.671230 systemd-logind[1989]: New session 18 of user core.
Jul 10 00:22:57.674361 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 10 00:22:58.419385 sshd[5003]: Connection closed by 139.178.89.65 port 43388
Jul 10 00:22:58.420107 sshd-session[5001]: pam_unix(sshd:session): session closed for user core
Jul 10 00:22:58.430644 systemd[1]: sshd@17-172.31.26.174:22-139.178.89.65:43388.service: Deactivated successfully.
Jul 10 00:22:58.433046 systemd[1]: session-18.scope: Deactivated successfully.
Jul 10 00:22:58.434476 systemd-logind[1989]: Session 18 logged out. Waiting for processes to exit.
Jul 10 00:22:58.436332 systemd-logind[1989]: Removed session 18.
Jul 10 00:22:58.451485 systemd[1]: Started sshd@18-172.31.26.174:22-139.178.89.65:43398.service - OpenSSH per-connection server daemon (139.178.89.65:43398).
Jul 10 00:22:58.653562 sshd[5013]: Accepted publickey for core from 139.178.89.65 port 43398 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:22:58.655005 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:22:58.660819 systemd-logind[1989]: New session 19 of user core.
Jul 10 00:22:58.669346 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 10 00:22:59.689645 sshd[5015]: Connection closed by 139.178.89.65 port 43398
Jul 10 00:22:59.690200 sshd-session[5013]: pam_unix(sshd:session): session closed for user core
Jul 10 00:22:59.697122 systemd[1]: sshd@18-172.31.26.174:22-139.178.89.65:43398.service: Deactivated successfully.
Jul 10 00:22:59.697644 systemd-logind[1989]: Session 19 logged out. Waiting for processes to exit.
Jul 10 00:22:59.700840 systemd[1]: session-19.scope: Deactivated successfully.
Jul 10 00:22:59.704528 systemd-logind[1989]: Removed session 19.
Jul 10 00:22:59.722439 systemd[1]: Started sshd@19-172.31.26.174:22-139.178.89.65:53712.service - OpenSSH per-connection server daemon (139.178.89.65:53712).
Jul 10 00:22:59.897814 sshd[5033]: Accepted publickey for core from 139.178.89.65 port 53712 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:22:59.899277 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:22:59.904496 systemd-logind[1989]: New session 20 of user core.
Jul 10 00:22:59.909365 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 10 00:23:00.275219 sshd[5035]: Connection closed by 139.178.89.65 port 53712
Jul 10 00:23:00.275775 sshd-session[5033]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:00.279953 systemd[1]: sshd@19-172.31.26.174:22-139.178.89.65:53712.service: Deactivated successfully.
Jul 10 00:23:00.281837 systemd[1]: session-20.scope: Deactivated successfully.
Jul 10 00:23:00.283361 systemd-logind[1989]: Session 20 logged out. Waiting for processes to exit.
Jul 10 00:23:00.286351 systemd-logind[1989]: Removed session 20.
Jul 10 00:23:00.310517 systemd[1]: Started sshd@20-172.31.26.174:22-139.178.89.65:53716.service - OpenSSH per-connection server daemon (139.178.89.65:53716).
Jul 10 00:23:00.487495 sshd[5045]: Accepted publickey for core from 139.178.89.65 port 53716 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:23:00.488942 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:00.495177 systemd-logind[1989]: New session 21 of user core.
Jul 10 00:23:00.502387 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 10 00:23:00.690786 sshd[5047]: Connection closed by 139.178.89.65 port 53716
Jul 10 00:23:00.691519 sshd-session[5045]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:00.696110 systemd[1]: sshd@20-172.31.26.174:22-139.178.89.65:53716.service: Deactivated successfully.
Jul 10 00:23:00.698926 systemd[1]: session-21.scope: Deactivated successfully.
Jul 10 00:23:00.700011 systemd-logind[1989]: Session 21 logged out. Waiting for processes to exit.
Jul 10 00:23:00.702057 systemd-logind[1989]: Removed session 21.
Jul 10 00:23:05.728298 systemd[1]: Started sshd@21-172.31.26.174:22-139.178.89.65:53718.service - OpenSSH per-connection server daemon (139.178.89.65:53718).
Jul 10 00:23:05.907053 sshd[5061]: Accepted publickey for core from 139.178.89.65 port 53718 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:23:05.908608 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:05.915007 systemd-logind[1989]: New session 22 of user core.
Jul 10 00:23:05.920333 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 10 00:23:06.100237 sshd[5063]: Connection closed by 139.178.89.65 port 53718
Jul 10 00:23:06.101004 sshd-session[5061]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:06.104852 systemd[1]: sshd@21-172.31.26.174:22-139.178.89.65:53718.service: Deactivated successfully.
Jul 10 00:23:06.106926 systemd[1]: session-22.scope: Deactivated successfully.
Jul 10 00:23:06.107996 systemd-logind[1989]: Session 22 logged out. Waiting for processes to exit.
Jul 10 00:23:06.109545 systemd-logind[1989]: Removed session 22.
Jul 10 00:23:11.136136 systemd[1]: Started sshd@22-172.31.26.174:22-139.178.89.65:45760.service - OpenSSH per-connection server daemon (139.178.89.65:45760).
Jul 10 00:23:11.302821 sshd[5077]: Accepted publickey for core from 139.178.89.65 port 45760 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:23:11.304087 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:11.309318 systemd-logind[1989]: New session 23 of user core.
Jul 10 00:23:11.318370 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 10 00:23:11.490433 sshd[5079]: Connection closed by 139.178.89.65 port 45760
Jul 10 00:23:11.491273 sshd-session[5077]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:11.495446 systemd[1]: sshd@22-172.31.26.174:22-139.178.89.65:45760.service: Deactivated successfully.
Jul 10 00:23:11.497974 systemd[1]: session-23.scope: Deactivated successfully.
Jul 10 00:23:11.499506 systemd-logind[1989]: Session 23 logged out. Waiting for processes to exit.
Jul 10 00:23:11.501143 systemd-logind[1989]: Removed session 23.
Jul 10 00:23:16.526539 systemd[1]: Started sshd@23-172.31.26.174:22-139.178.89.65:45764.service - OpenSSH per-connection server daemon (139.178.89.65:45764).
Jul 10 00:23:16.697391 sshd[5091]: Accepted publickey for core from 139.178.89.65 port 45764 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:23:16.699044 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:16.703990 systemd-logind[1989]: New session 24 of user core.
Jul 10 00:23:16.712400 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 10 00:23:16.895664 sshd[5093]: Connection closed by 139.178.89.65 port 45764
Jul 10 00:23:16.896229 sshd-session[5091]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:16.900033 systemd[1]: sshd@23-172.31.26.174:22-139.178.89.65:45764.service: Deactivated successfully.
Jul 10 00:23:16.901763 systemd[1]: session-24.scope: Deactivated successfully.
Jul 10 00:23:16.902742 systemd-logind[1989]: Session 24 logged out. Waiting for processes to exit.
Jul 10 00:23:16.904393 systemd-logind[1989]: Removed session 24.
Jul 10 00:23:21.928418 systemd[1]: Started sshd@24-172.31.26.174:22-139.178.89.65:58822.service - OpenSSH per-connection server daemon (139.178.89.65:58822).
Jul 10 00:23:22.102962 sshd[5105]: Accepted publickey for core from 139.178.89.65 port 58822 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:23:22.104313 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:22.109210 systemd-logind[1989]: New session 25 of user core.
Jul 10 00:23:22.114332 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 10 00:23:22.294546 sshd[5107]: Connection closed by 139.178.89.65 port 58822
Jul 10 00:23:22.295410 sshd-session[5105]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:22.299025 systemd[1]: sshd@24-172.31.26.174:22-139.178.89.65:58822.service: Deactivated successfully.
Jul 10 00:23:22.300877 systemd[1]: session-25.scope: Deactivated successfully.
Jul 10 00:23:22.301706 systemd-logind[1989]: Session 25 logged out. Waiting for processes to exit.
Jul 10 00:23:22.303312 systemd-logind[1989]: Removed session 25.
Jul 10 00:23:22.329568 systemd[1]: Started sshd@25-172.31.26.174:22-139.178.89.65:58828.service - OpenSSH per-connection server daemon (139.178.89.65:58828).
Jul 10 00:23:22.514258 sshd[5119]: Accepted publickey for core from 139.178.89.65 port 58828 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U
Jul 10 00:23:22.515687 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:22.520201 systemd-logind[1989]: New session 26 of user core.
Jul 10 00:23:22.527352 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 10 00:23:24.242533 containerd[2015]: time="2025-07-10T00:23:24.242490301Z" level=info msg="StopContainer for \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\" with timeout 30 (s)"
Jul 10 00:23:24.244891 containerd[2015]: time="2025-07-10T00:23:24.244860851Z" level=info msg="Stop container \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\" with signal terminated"
Jul 10 00:23:24.258656 systemd[1]: cri-containerd-475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b.scope: Deactivated successfully.
Jul 10 00:23:24.262514 containerd[2015]: time="2025-07-10T00:23:24.262484302Z" level=info msg="TaskExit event in podsandbox handler container_id:\"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\" id:\"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\" pid:4183 exited_at:{seconds:1752107004 nanos:261907082}"
Jul 10 00:23:24.262613 containerd[2015]: time="2025-07-10T00:23:24.262554824Z" level=info msg="received exit event container_id:\"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\" id:\"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\" pid:4183 exited_at:{seconds:1752107004 nanos:261907082}"
Jul 10 00:23:24.279487 containerd[2015]: time="2025-07-10T00:23:24.279431160Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 00:23:24.279697 containerd[2015]: time="2025-07-10T00:23:24.279619126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" id:\"e7a171e5a1a0789c2959c360e2fb993d9da7f8f9c18207db4dfb97ea40305bfa\" pid:5146 exited_at:{seconds:1752107004 nanos:278383475}"
Jul 10 00:23:24.284028 containerd[2015]: time="2025-07-10T00:23:24.283572021Z" level=info msg="StopContainer for \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" with timeout 2 (s)"
Jul 10 00:23:24.286681 containerd[2015]: time="2025-07-10T00:23:24.286626649Z" level=info msg="Stop container \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" with signal terminated"
Jul 10 00:23:24.293977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b-rootfs.mount: Deactivated successfully.
Jul 10 00:23:24.302067 systemd-networkd[1734]: lxc_health: Link DOWN
Jul 10 00:23:24.302075 systemd-networkd[1734]: lxc_health: Lost carrier
Jul 10 00:23:24.319767 systemd[1]: cri-containerd-435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33.scope: Deactivated successfully.
Jul 10 00:23:24.320138 systemd[1]: cri-containerd-435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33.scope: Consumed 7.682s CPU time, 200.8M memory peak, 85.1M read from disk, 13.3M written to disk.
Jul 10 00:23:24.322787 containerd[2015]: time="2025-07-10T00:23:24.322549453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" id:\"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" pid:4219 exited_at:{seconds:1752107004 nanos:321743596}"
Jul 10 00:23:24.322787 containerd[2015]: time="2025-07-10T00:23:24.322588301Z" level=info msg="received exit event container_id:\"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" id:\"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" pid:4219 exited_at:{seconds:1752107004 nanos:321743596}"
Jul 10 00:23:24.323642 containerd[2015]: time="2025-07-10T00:23:24.323500423Z" level=info msg="StopContainer for \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\" returns successfully"
Jul 10 00:23:24.324434 containerd[2015]: time="2025-07-10T00:23:24.324400059Z" level=info msg="StopPodSandbox for \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\""
Jul 10 00:23:24.339799 containerd[2015]: time="2025-07-10T00:23:24.339511378Z" level=info msg="Container to stop \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:23:24.353488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33-rootfs.mount: Deactivated successfully.
Jul 10 00:23:24.355807 systemd[1]: cri-containerd-add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3.scope: Deactivated successfully.
Jul 10 00:23:24.359946 containerd[2015]: time="2025-07-10T00:23:24.359902255Z" level=info msg="TaskExit event in podsandbox handler container_id:\"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\" id:\"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\" pid:3797 exit_status:137 exited_at:{seconds:1752107004 nanos:359152449}"
Jul 10 00:23:24.387626 containerd[2015]: time="2025-07-10T00:23:24.387581257Z" level=info msg="StopContainer for \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" returns successfully"
Jul 10 00:23:24.388520 containerd[2015]: time="2025-07-10T00:23:24.388480567Z" level=info msg="StopPodSandbox for \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\""
Jul 10 00:23:24.388623 containerd[2015]: time="2025-07-10T00:23:24.388546486Z" level=info msg="Container to stop \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:23:24.388623 containerd[2015]: time="2025-07-10T00:23:24.388564666Z" level=info msg="Container to stop \"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:23:24.388623 containerd[2015]: time="2025-07-10T00:23:24.388577039Z" level=info msg="Container to stop \"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:23:24.388623 containerd[2015]: time="2025-07-10T00:23:24.388588956Z" level=info msg="Container to stop \"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:23:24.388623 containerd[2015]: time="2025-07-10T00:23:24.388600931Z" level=info msg="Container to stop \"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:23:24.399590 systemd[1]: cri-containerd-1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3.scope: Deactivated successfully.
Jul 10 00:23:24.407123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3-rootfs.mount: Deactivated successfully.
Jul 10 00:23:24.421760 containerd[2015]: time="2025-07-10T00:23:24.421631308Z" level=info msg="shim disconnected" id=add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3 namespace=k8s.io
Jul 10 00:23:24.421760 containerd[2015]: time="2025-07-10T00:23:24.421700722Z" level=warning msg="cleaning up after shim disconnected" id=add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3 namespace=k8s.io
Jul 10 00:23:24.440493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3-rootfs.mount: Deactivated successfully.
Jul 10 00:23:24.443261 containerd[2015]: time="2025-07-10T00:23:24.421713080Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:23:24.449934 containerd[2015]: time="2025-07-10T00:23:24.449891187Z" level=info msg="shim disconnected" id=1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3 namespace=k8s.io
Jul 10 00:23:24.449934 containerd[2015]: time="2025-07-10T00:23:24.449922539Z" level=warning msg="cleaning up after shim disconnected" id=1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3 namespace=k8s.io
Jul 10 00:23:24.450325 containerd[2015]: time="2025-07-10T00:23:24.449933415Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:23:24.476900 containerd[2015]: time="2025-07-10T00:23:24.475579373Z" level=info msg="received exit event sandbox_id:\"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\" exit_status:137 exited_at:{seconds:1752107004 nanos:359152449}"
Jul 10 00:23:24.476900 containerd[2015]: time="2025-07-10T00:23:24.476112477Z"
level=info msg="TaskExit event in podsandbox handler container_id:\"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" id:\"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" pid:3719 exit_status:137 exited_at:{seconds:1752107004 nanos:408793405}" Jul 10 00:23:24.482366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3-shm.mount: Deactivated successfully. Jul 10 00:23:24.491769 containerd[2015]: time="2025-07-10T00:23:24.490524844Z" level=info msg="TearDown network for sandbox \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\" successfully" Jul 10 00:23:24.491769 containerd[2015]: time="2025-07-10T00:23:24.490573444Z" level=info msg="StopPodSandbox for \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\" returns successfully" Jul 10 00:23:24.491769 containerd[2015]: time="2025-07-10T00:23:24.490744277Z" level=info msg="received exit event sandbox_id:\"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" exit_status:137 exited_at:{seconds:1752107004 nanos:408793405}" Jul 10 00:23:24.491769 containerd[2015]: time="2025-07-10T00:23:24.491653432Z" level=info msg="TearDown network for sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" successfully" Jul 10 00:23:24.491769 containerd[2015]: time="2025-07-10T00:23:24.491687435Z" level=info msg="StopPodSandbox for \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" returns successfully" Jul 10 00:23:24.539500 kubelet[3309]: I0710 00:23:24.539250 3309 scope.go:117] "RemoveContainer" containerID="475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b" Jul 10 00:23:24.546754 containerd[2015]: time="2025-07-10T00:23:24.546221844Z" level=info msg="RemoveContainer for \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\"" Jul 10 00:23:24.560836 containerd[2015]: time="2025-07-10T00:23:24.560716910Z" 
level=info msg="RemoveContainer for \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\" returns successfully" Jul 10 00:23:24.561188 kubelet[3309]: I0710 00:23:24.561134 3309 scope.go:117] "RemoveContainer" containerID="475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b" Jul 10 00:23:24.561627 containerd[2015]: time="2025-07-10T00:23:24.561588179Z" level=error msg="ContainerStatus for \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\": not found" Jul 10 00:23:24.562196 kubelet[3309]: E0710 00:23:24.561926 3309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\": not found" containerID="475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b" Jul 10 00:23:24.562196 kubelet[3309]: I0710 00:23:24.561969 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b"} err="failed to get container status \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"475e51c92a68aeae7906da0fcc23a54918d080bbd7f4cee3b1a2c8d9df3c1a1b\": not found" Jul 10 00:23:24.562196 kubelet[3309]: I0710 00:23:24.562096 3309 scope.go:117] "RemoveContainer" containerID="435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33" Jul 10 00:23:24.564763 containerd[2015]: time="2025-07-10T00:23:24.564736090Z" level=info msg="RemoveContainer for \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\"" Jul 10 00:23:24.573931 containerd[2015]: time="2025-07-10T00:23:24.573838030Z" level=info msg="RemoveContainer 
for \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" returns successfully" Jul 10 00:23:24.574057 kubelet[3309]: I0710 00:23:24.574039 3309 scope.go:117] "RemoveContainer" containerID="1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2" Jul 10 00:23:24.575922 containerd[2015]: time="2025-07-10T00:23:24.575890811Z" level=info msg="RemoveContainer for \"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\"" Jul 10 00:23:24.582308 containerd[2015]: time="2025-07-10T00:23:24.582277852Z" level=info msg="RemoveContainer for \"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\" returns successfully" Jul 10 00:23:24.582549 kubelet[3309]: I0710 00:23:24.582532 3309 scope.go:117] "RemoveContainer" containerID="90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd" Jul 10 00:23:24.584890 containerd[2015]: time="2025-07-10T00:23:24.584863902Z" level=info msg="RemoveContainer for \"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\"" Jul 10 00:23:24.590849 containerd[2015]: time="2025-07-10T00:23:24.590815861Z" level=info msg="RemoveContainer for \"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\" returns successfully" Jul 10 00:23:24.591033 kubelet[3309]: I0710 00:23:24.591010 3309 scope.go:117] "RemoveContainer" containerID="8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de" Jul 10 00:23:24.592331 containerd[2015]: time="2025-07-10T00:23:24.592303318Z" level=info msg="RemoveContainer for \"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\"" Jul 10 00:23:24.597869 containerd[2015]: time="2025-07-10T00:23:24.597835511Z" level=info msg="RemoveContainer for \"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\" returns successfully" Jul 10 00:23:24.598086 kubelet[3309]: I0710 00:23:24.598066 3309 scope.go:117] "RemoveContainer" containerID="c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5" Jul 10 00:23:24.599584 
containerd[2015]: time="2025-07-10T00:23:24.599560993Z" level=info msg="RemoveContainer for \"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\"" Jul 10 00:23:24.604773 containerd[2015]: time="2025-07-10T00:23:24.604739614Z" level=info msg="RemoveContainer for \"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\" returns successfully" Jul 10 00:23:24.605055 kubelet[3309]: I0710 00:23:24.605027 3309 scope.go:117] "RemoveContainer" containerID="435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33" Jul 10 00:23:24.605376 containerd[2015]: time="2025-07-10T00:23:24.605335528Z" level=error msg="ContainerStatus for \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\": not found" Jul 10 00:23:24.605492 kubelet[3309]: E0710 00:23:24.605461 3309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\": not found" containerID="435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33" Jul 10 00:23:24.605567 kubelet[3309]: I0710 00:23:24.605501 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33"} err="failed to get container status \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\": rpc error: code = NotFound desc = an error occurred when try to find container \"435805a7e8558db670158d626d950e46187d52722a0a9a1f262fe737f416cb33\": not found" Jul 10 00:23:24.605567 kubelet[3309]: I0710 00:23:24.605520 3309 scope.go:117] "RemoveContainer" containerID="1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2" Jul 10 00:23:24.605718 containerd[2015]: 
time="2025-07-10T00:23:24.605691752Z" level=error msg="ContainerStatus for \"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\": not found" Jul 10 00:23:24.605846 kubelet[3309]: E0710 00:23:24.605817 3309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\": not found" containerID="1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2" Jul 10 00:23:24.605846 kubelet[3309]: I0710 00:23:24.605839 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2"} err="failed to get container status \"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e05f6c6658031a258f91fe699847e22dd7377135147add6917586ff8f27f5b2\": not found" Jul 10 00:23:24.605911 kubelet[3309]: I0710 00:23:24.605852 3309 scope.go:117] "RemoveContainer" containerID="90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd" Jul 10 00:23:24.606050 containerd[2015]: time="2025-07-10T00:23:24.605959899Z" level=error msg="ContainerStatus for \"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\": not found" Jul 10 00:23:24.606169 kubelet[3309]: E0710 00:23:24.606112 3309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\": not 
found" containerID="90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd" Jul 10 00:23:24.606169 kubelet[3309]: I0710 00:23:24.606133 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd"} err="failed to get container status \"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\": rpc error: code = NotFound desc = an error occurred when try to find container \"90594256961136f62e9e76c5183c8278be12d8f19aa6b38e375fb808df13bbcd\": not found" Jul 10 00:23:24.606169 kubelet[3309]: I0710 00:23:24.606149 3309 scope.go:117] "RemoveContainer" containerID="8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de" Jul 10 00:23:24.606418 containerd[2015]: time="2025-07-10T00:23:24.606322882Z" level=error msg="ContainerStatus for \"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\": not found" Jul 10 00:23:24.606506 kubelet[3309]: E0710 00:23:24.606473 3309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\": not found" containerID="8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de" Jul 10 00:23:24.606506 kubelet[3309]: I0710 00:23:24.606490 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de"} err="failed to get container status \"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d47864354152d4af430384ae8a5441dd78071119c73f71f6f6f22abd2ef80de\": not found" Jul 10 
00:23:24.606506 kubelet[3309]: I0710 00:23:24.606502 3309 scope.go:117] "RemoveContainer" containerID="c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5" Jul 10 00:23:24.606805 containerd[2015]: time="2025-07-10T00:23:24.606782994Z" level=error msg="ContainerStatus for \"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\": not found" Jul 10 00:23:24.606923 kubelet[3309]: E0710 00:23:24.606902 3309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\": not found" containerID="c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5" Jul 10 00:23:24.606960 kubelet[3309]: I0710 00:23:24.606924 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5"} err="failed to get container status \"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8cbde62aac640397cbf79fcb279c9265a2d23c752b6d58d70fc678e2a6e35b5\": not found" Jul 10 00:23:24.627210 kubelet[3309]: I0710 00:23:24.627143 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-xtables-lock\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.627354 kubelet[3309]: I0710 00:23:24.627294 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-cilium-cgroup\") pod 
\"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.627354 kubelet[3309]: I0710 00:23:24.627223 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:23:24.627408 kubelet[3309]: I0710 00:23:24.627364 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-hostproc\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.627436 kubelet[3309]: I0710 00:23:24.627413 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:23:24.627436 kubelet[3309]: I0710 00:23:24.627430 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-hostproc" (OuterVolumeSpecName: "hostproc") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:23:24.627496 kubelet[3309]: I0710 00:23:24.627441 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsrg8\" (UniqueName: \"kubernetes.io/projected/3e054203-3879-496d-b601-3f8aa77e7cab-kube-api-access-bsrg8\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.628054 kubelet[3309]: I0710 00:23:24.627552 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e054203-3879-496d-b601-3f8aa77e7cab-cilium-config-path\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.628054 kubelet[3309]: I0710 00:23:24.627588 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wdb6\" (UniqueName: \"kubernetes.io/projected/4fd23dc4-5134-4024-bfc1-846d29b52788-kube-api-access-4wdb6\") pod \"4fd23dc4-5134-4024-bfc1-846d29b52788\" (UID: \"4fd23dc4-5134-4024-bfc1-846d29b52788\") " Jul 10 00:23:24.628054 kubelet[3309]: I0710 00:23:24.627607 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-lib-modules\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.628054 kubelet[3309]: I0710 00:23:24.627623 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e054203-3879-496d-b601-3f8aa77e7cab-hubble-tls\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.628054 kubelet[3309]: I0710 00:23:24.627637 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-host-proc-sys-kernel\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.628054 kubelet[3309]: I0710 00:23:24.627653 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fd23dc4-5134-4024-bfc1-846d29b52788-cilium-config-path\") pod \"4fd23dc4-5134-4024-bfc1-846d29b52788\" (UID: \"4fd23dc4-5134-4024-bfc1-846d29b52788\") " Jul 10 00:23:24.628345 kubelet[3309]: I0710 00:23:24.627669 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-host-proc-sys-net\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.628345 kubelet[3309]: I0710 00:23:24.627690 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-cilium-run\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.628345 kubelet[3309]: I0710 00:23:24.627706 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e054203-3879-496d-b601-3f8aa77e7cab-clustermesh-secrets\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.628345 kubelet[3309]: I0710 00:23:24.627721 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-bpf-maps\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.628345 
kubelet[3309]: I0710 00:23:24.627735 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-etc-cni-netd\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.628345 kubelet[3309]: I0710 00:23:24.627749 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-cni-path\") pod \"3e054203-3879-496d-b601-3f8aa77e7cab\" (UID: \"3e054203-3879-496d-b601-3f8aa77e7cab\") " Jul 10 00:23:24.629280 kubelet[3309]: I0710 00:23:24.627790 3309 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-hostproc\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.629280 kubelet[3309]: I0710 00:23:24.627802 3309 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-xtables-lock\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.629280 kubelet[3309]: I0710 00:23:24.627810 3309 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-cilium-cgroup\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.629280 kubelet[3309]: I0710 00:23:24.627838 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-cni-path" (OuterVolumeSpecName: "cni-path") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:23:24.629638 kubelet[3309]: I0710 00:23:24.629612 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e054203-3879-496d-b601-3f8aa77e7cab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:23:24.629689 kubelet[3309]: I0710 00:23:24.629656 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:23:24.629689 kubelet[3309]: I0710 00:23:24.629671 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:23:24.630529 kubelet[3309]: I0710 00:23:24.630471 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:23:24.630757 kubelet[3309]: I0710 00:23:24.630708 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:23:24.630951 kubelet[3309]: I0710 00:23:24.630914 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:23:24.630951 kubelet[3309]: I0710 00:23:24.630937 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:23:24.634760 kubelet[3309]: I0710 00:23:24.634688 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd23dc4-5134-4024-bfc1-846d29b52788-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4fd23dc4-5134-4024-bfc1-846d29b52788" (UID: "4fd23dc4-5134-4024-bfc1-846d29b52788"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:23:24.636445 kubelet[3309]: I0710 00:23:24.636363 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd23dc4-5134-4024-bfc1-846d29b52788-kube-api-access-4wdb6" (OuterVolumeSpecName: "kube-api-access-4wdb6") pod "4fd23dc4-5134-4024-bfc1-846d29b52788" (UID: "4fd23dc4-5134-4024-bfc1-846d29b52788"). InnerVolumeSpecName "kube-api-access-4wdb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:23:24.636555 kubelet[3309]: I0710 00:23:24.636441 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e054203-3879-496d-b601-3f8aa77e7cab-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:23:24.636634 kubelet[3309]: I0710 00:23:24.636613 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e054203-3879-496d-b601-3f8aa77e7cab-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:23:24.636675 kubelet[3309]: I0710 00:23:24.636617 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e054203-3879-496d-b601-3f8aa77e7cab-kube-api-access-bsrg8" (OuterVolumeSpecName: "kube-api-access-bsrg8") pod "3e054203-3879-496d-b601-3f8aa77e7cab" (UID: "3e054203-3879-496d-b601-3f8aa77e7cab"). InnerVolumeSpecName "kube-api-access-bsrg8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:23:24.728628 kubelet[3309]: I0710 00:23:24.728588 3309 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4wdb6\" (UniqueName: \"kubernetes.io/projected/4fd23dc4-5134-4024-bfc1-846d29b52788-kube-api-access-4wdb6\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.728628 kubelet[3309]: I0710 00:23:24.728622 3309 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-lib-modules\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.728628 kubelet[3309]: I0710 00:23:24.728632 3309 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e054203-3879-496d-b601-3f8aa77e7cab-hubble-tls\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.728628 kubelet[3309]: I0710 00:23:24.728640 3309 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-host-proc-sys-kernel\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.728858 kubelet[3309]: I0710 00:23:24.728653 3309 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fd23dc4-5134-4024-bfc1-846d29b52788-cilium-config-path\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.728858 kubelet[3309]: I0710 00:23:24.728662 3309 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-host-proc-sys-net\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.728858 kubelet[3309]: I0710 00:23:24.728669 3309 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-cilium-run\") on node 
\"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.728858 kubelet[3309]: I0710 00:23:24.728677 3309 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e054203-3879-496d-b601-3f8aa77e7cab-clustermesh-secrets\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.728858 kubelet[3309]: I0710 00:23:24.728684 3309 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-bpf-maps\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.728858 kubelet[3309]: I0710 00:23:24.728691 3309 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-etc-cni-netd\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.728858 kubelet[3309]: I0710 00:23:24.728700 3309 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e054203-3879-496d-b601-3f8aa77e7cab-cni-path\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.728858 kubelet[3309]: I0710 00:23:24.728708 3309 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bsrg8\" (UniqueName: \"kubernetes.io/projected/3e054203-3879-496d-b601-3f8aa77e7cab-kube-api-access-bsrg8\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.730091 kubelet[3309]: I0710 00:23:24.728715 3309 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e054203-3879-496d-b601-3f8aa77e7cab-cilium-config-path\") on node \"ip-172-31-26-174\" DevicePath \"\"" Jul 10 00:23:24.843304 systemd[1]: Removed slice kubepods-besteffort-pod4fd23dc4_5134_4024_bfc1_846d29b52788.slice - libcontainer container kubepods-besteffort-pod4fd23dc4_5134_4024_bfc1_846d29b52788.slice. 
Jul 10 00:23:24.860215 systemd[1]: Removed slice kubepods-burstable-pod3e054203_3879_496d_b601_3f8aa77e7cab.slice - libcontainer container kubepods-burstable-pod3e054203_3879_496d_b601_3f8aa77e7cab.slice. Jul 10 00:23:24.860553 systemd[1]: kubepods-burstable-pod3e054203_3879_496d_b601_3f8aa77e7cab.slice: Consumed 7.774s CPU time, 201.2M memory peak, 85.1M read from disk, 13.3M written to disk. Jul 10 00:23:25.294136 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3-shm.mount: Deactivated successfully. Jul 10 00:23:25.294260 systemd[1]: var-lib-kubelet-pods-4fd23dc4\x2d5134\x2d4024\x2dbfc1\x2d846d29b52788-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4wdb6.mount: Deactivated successfully. Jul 10 00:23:25.294326 systemd[1]: var-lib-kubelet-pods-3e054203\x2d3879\x2d496d\x2db601\x2d3f8aa77e7cab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbsrg8.mount: Deactivated successfully. Jul 10 00:23:25.294383 systemd[1]: var-lib-kubelet-pods-3e054203\x2d3879\x2d496d\x2db601\x2d3f8aa77e7cab-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:23:25.294447 systemd[1]: var-lib-kubelet-pods-3e054203\x2d3879\x2d496d\x2db601\x2d3f8aa77e7cab-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:23:26.201756 sshd[5121]: Connection closed by 139.178.89.65 port 58828 Jul 10 00:23:26.202612 sshd-session[5119]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:26.207557 systemd[1]: sshd@25-172.31.26.174:22-139.178.89.65:58828.service: Deactivated successfully. Jul 10 00:23:26.209829 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:23:26.212874 systemd-logind[1989]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:23:26.214843 systemd-logind[1989]: Removed session 26. 
Jul 10 00:23:26.221367 kubelet[3309]: I0710 00:23:26.221337 3309 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e054203-3879-496d-b601-3f8aa77e7cab" path="/var/lib/kubelet/pods/3e054203-3879-496d-b601-3f8aa77e7cab/volumes" Jul 10 00:23:26.222139 kubelet[3309]: I0710 00:23:26.222109 3309 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd23dc4-5134-4024-bfc1-846d29b52788" path="/var/lib/kubelet/pods/4fd23dc4-5134-4024-bfc1-846d29b52788/volumes" Jul 10 00:23:26.235489 systemd[1]: Started sshd@26-172.31.26.174:22-139.178.89.65:58834.service - OpenSSH per-connection server daemon (139.178.89.65:58834). Jul 10 00:23:26.371500 ntpd[1973]: Deleting interface #11 lxc_health, fe80::1cc7:15ff:fe86:d047%8#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs Jul 10 00:23:26.371874 ntpd[1973]: 10 Jul 00:23:26 ntpd[1973]: Deleting interface #11 lxc_health, fe80::1cc7:15ff:fe86:d047%8#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs Jul 10 00:23:26.424852 sshd[5274]: Accepted publickey for core from 139.178.89.65 port 58834 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U Jul 10 00:23:26.426249 sshd-session[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:26.431763 systemd-logind[1989]: New session 27 of user core. Jul 10 00:23:26.434345 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 10 00:23:27.279741 sshd[5276]: Connection closed by 139.178.89.65 port 58834 Jul 10 00:23:27.280429 sshd-session[5274]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:27.287286 systemd[1]: sshd@26-172.31.26.174:22-139.178.89.65:58834.service: Deactivated successfully. Jul 10 00:23:27.292099 systemd[1]: session-27.scope: Deactivated successfully. Jul 10 00:23:27.295577 systemd-logind[1989]: Session 27 logged out. Waiting for processes to exit. Jul 10 00:23:27.302098 systemd-logind[1989]: Removed session 27. 
Jul 10 00:23:27.320898 systemd[1]: Started sshd@27-172.31.26.174:22-139.178.89.65:58844.service - OpenSSH per-connection server daemon (139.178.89.65:58844). Jul 10 00:23:27.331576 kubelet[3309]: E0710 00:23:27.331488 3309 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:23:27.340065 kubelet[3309]: I0710 00:23:27.340029 3309 memory_manager.go:355] "RemoveStaleState removing state" podUID="4fd23dc4-5134-4024-bfc1-846d29b52788" containerName="cilium-operator" Jul 10 00:23:27.340065 kubelet[3309]: I0710 00:23:27.340062 3309 memory_manager.go:355] "RemoveStaleState removing state" podUID="3e054203-3879-496d-b601-3f8aa77e7cab" containerName="cilium-agent" Jul 10 00:23:27.358480 kubelet[3309]: W0710 00:23:27.358448 3309 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-26-174" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-174' and this object Jul 10 00:23:27.358606 kubelet[3309]: E0710 00:23:27.358496 3309 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-26-174\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-174' and this object" logger="UnhandledError" Jul 10 00:23:27.358606 kubelet[3309]: W0710 00:23:27.358568 3309 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-26-174" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 
'ip-172-31-26-174' and this object Jul 10 00:23:27.358606 kubelet[3309]: E0710 00:23:27.358583 3309 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-26-174\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-174' and this object" logger="UnhandledError" Jul 10 00:23:27.358779 kubelet[3309]: W0710 00:23:27.358648 3309 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-26-174" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-174' and this object Jul 10 00:23:27.358779 kubelet[3309]: E0710 00:23:27.358664 3309 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-26-174\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-174' and this object" logger="UnhandledError" Jul 10 00:23:27.361471 systemd[1]: Created slice kubepods-burstable-pod0916bbfa_fa85_4db1_9d67_6dd7eeac9ff0.slice - libcontainer container kubepods-burstable-pod0916bbfa_fa85_4db1_9d67_6dd7eeac9ff0.slice. 
Jul 10 00:23:27.362943 kubelet[3309]: W0710 00:23:27.362911 3309 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-26-174" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-174' and this object Jul 10 00:23:27.363053 kubelet[3309]: E0710 00:23:27.362959 3309 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-26-174\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-174' and this object" logger="UnhandledError" Jul 10 00:23:27.366306 kubelet[3309]: I0710 00:23:27.366270 3309 status_manager.go:890] "Failed to get status for pod" podUID="0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0" pod="kube-system/cilium-6rgkr" err="pods \"cilium-6rgkr\" is forbidden: User \"system:node:ip-172-31-26-174\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-174' and this object" Jul 10 00:23:27.447246 kubelet[3309]: I0710 00:23:27.447204 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-hostproc\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447246 kubelet[3309]: I0710 00:23:27.447270 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-host-proc-sys-kernel\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") 
" pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447491 kubelet[3309]: I0710 00:23:27.447311 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-hubble-tls\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447491 kubelet[3309]: I0710 00:23:27.447337 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-cilium-config-path\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447491 kubelet[3309]: I0710 00:23:27.447361 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-xtables-lock\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447491 kubelet[3309]: I0710 00:23:27.447383 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-cilium-cgroup\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447491 kubelet[3309]: I0710 00:23:27.447408 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-bpf-maps\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447491 kubelet[3309]: I0710 00:23:27.447429 3309 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-etc-cni-netd\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447794 kubelet[3309]: I0710 00:23:27.447460 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v7p8\" (UniqueName: \"kubernetes.io/projected/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-kube-api-access-4v7p8\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447794 kubelet[3309]: I0710 00:23:27.447485 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-cilium-run\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447794 kubelet[3309]: I0710 00:23:27.447517 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-cni-path\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447794 kubelet[3309]: I0710 00:23:27.447541 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-clustermesh-secrets\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447794 kubelet[3309]: I0710 00:23:27.447565 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-host-proc-sys-net\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.447794 kubelet[3309]: I0710 00:23:27.447592 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-lib-modules\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.448024 kubelet[3309]: I0710 00:23:27.447617 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-cilium-ipsec-secrets\") pod \"cilium-6rgkr\" (UID: \"0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0\") " pod="kube-system/cilium-6rgkr" Jul 10 00:23:27.546956 sshd[5286]: Accepted publickey for core from 139.178.89.65 port 58844 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U Jul 10 00:23:27.547752 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:27.555572 systemd-logind[1989]: New session 28 of user core. Jul 10 00:23:27.558355 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 10 00:23:27.682009 sshd[5289]: Connection closed by 139.178.89.65 port 58844 Jul 10 00:23:27.682575 sshd-session[5286]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:27.687029 systemd[1]: sshd@27-172.31.26.174:22-139.178.89.65:58844.service: Deactivated successfully. Jul 10 00:23:27.689282 systemd[1]: session-28.scope: Deactivated successfully. Jul 10 00:23:27.690080 systemd-logind[1989]: Session 28 logged out. Waiting for processes to exit. Jul 10 00:23:27.691922 systemd-logind[1989]: Removed session 28. 
Jul 10 00:23:27.715567 systemd[1]: Started sshd@28-172.31.26.174:22-139.178.89.65:58854.service - OpenSSH per-connection server daemon (139.178.89.65:58854). Jul 10 00:23:27.890186 sshd[5296]: Accepted publickey for core from 139.178.89.65 port 58854 ssh2: RSA SHA256:8gcBu3X/zjMKtjKrMkKIwTrYfDQG3sNa69IzDxa0i3U Jul 10 00:23:27.891604 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:27.896671 systemd-logind[1989]: New session 29 of user core. Jul 10 00:23:27.902350 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 10 00:23:28.549679 kubelet[3309]: E0710 00:23:28.549591 3309 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 10 00:23:28.549679 kubelet[3309]: E0710 00:23:28.549680 3309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-cilium-config-path podName:0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0 nodeName:}" failed. No retries permitted until 2025-07-10 00:23:29.049660932 +0000 UTC m=+96.927270336 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-cilium-config-path") pod "cilium-6rgkr" (UID: "0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0") : failed to sync configmap cache: timed out waiting for the condition Jul 10 00:23:28.550762 kubelet[3309]: E0710 00:23:28.550578 3309 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 00:23:28.550762 kubelet[3309]: E0710 00:23:28.550674 3309 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-6rgkr: failed to sync secret cache: timed out waiting for the condition Jul 10 00:23:28.550762 kubelet[3309]: E0710 00:23:28.550739 3309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-hubble-tls podName:0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0 nodeName:}" failed. No retries permitted until 2025-07-10 00:23:29.050724938 +0000 UTC m=+96.928334341 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0-hubble-tls") pod "cilium-6rgkr" (UID: "0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0") : failed to sync secret cache: timed out waiting for the condition Jul 10 00:23:29.167805 containerd[2015]: time="2025-07-10T00:23:29.167763124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6rgkr,Uid:0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0,Namespace:kube-system,Attempt:0,}" Jul 10 00:23:29.193757 containerd[2015]: time="2025-07-10T00:23:29.193714260Z" level=info msg="connecting to shim 71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f" address="unix:///run/containerd/s/157547e00d8beef4b410f44f8371d1f1371ada27b980695067aa7715198fab5b" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:23:29.225401 systemd[1]: Started cri-containerd-71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f.scope - libcontainer container 71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f. 
Jul 10 00:23:29.254826 containerd[2015]: time="2025-07-10T00:23:29.254781226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6rgkr,Uid:0916bbfa-fa85-4db1-9d67-6dd7eeac9ff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f\"" Jul 10 00:23:29.257583 containerd[2015]: time="2025-07-10T00:23:29.257551760Z" level=info msg="CreateContainer within sandbox \"71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:23:29.271565 containerd[2015]: time="2025-07-10T00:23:29.271275164Z" level=info msg="Container 720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:29.282051 containerd[2015]: time="2025-07-10T00:23:29.282010692Z" level=info msg="CreateContainer within sandbox \"71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3\"" Jul 10 00:23:29.283072 containerd[2015]: time="2025-07-10T00:23:29.282990822Z" level=info msg="StartContainer for \"720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3\"" Jul 10 00:23:29.284774 containerd[2015]: time="2025-07-10T00:23:29.284707379Z" level=info msg="connecting to shim 720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3" address="unix:///run/containerd/s/157547e00d8beef4b410f44f8371d1f1371ada27b980695067aa7715198fab5b" protocol=ttrpc version=3 Jul 10 00:23:29.305356 systemd[1]: Started cri-containerd-720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3.scope - libcontainer container 720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3. 
Jul 10 00:23:29.342897 containerd[2015]: time="2025-07-10T00:23:29.342784022Z" level=info msg="StartContainer for \"720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3\" returns successfully" Jul 10 00:23:29.360266 systemd[1]: cri-containerd-720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3.scope: Deactivated successfully. Jul 10 00:23:29.361264 systemd[1]: cri-containerd-720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3.scope: Consumed 20ms CPU time, 9.7M memory peak, 3.2M read from disk. Jul 10 00:23:29.361959 containerd[2015]: time="2025-07-10T00:23:29.361918461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3\" id:\"720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3\" pid:5367 exited_at:{seconds:1752107009 nanos:361439999}" Jul 10 00:23:29.362289 containerd[2015]: time="2025-07-10T00:23:29.362020801Z" level=info msg="received exit event container_id:\"720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3\" id:\"720184fe5cbc2c0207d62c51023fad46984cb70610c788f339e3baa4c3a362b3\" pid:5367 exited_at:{seconds:1752107009 nanos:361439999}" Jul 10 00:23:29.572670 containerd[2015]: time="2025-07-10T00:23:29.571902203Z" level=info msg="CreateContainer within sandbox \"71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:23:29.583751 containerd[2015]: time="2025-07-10T00:23:29.583676656Z" level=info msg="Container e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:29.593895 containerd[2015]: time="2025-07-10T00:23:29.593862339Z" level=info msg="CreateContainer within sandbox \"71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9\"" Jul 10 00:23:29.594395 containerd[2015]: time="2025-07-10T00:23:29.594318633Z" level=info msg="StartContainer for \"e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9\"" Jul 10 00:23:29.595599 containerd[2015]: time="2025-07-10T00:23:29.595552640Z" level=info msg="connecting to shim e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9" address="unix:///run/containerd/s/157547e00d8beef4b410f44f8371d1f1371ada27b980695067aa7715198fab5b" protocol=ttrpc version=3 Jul 10 00:23:29.618426 systemd[1]: Started cri-containerd-e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9.scope - libcontainer container e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9. Jul 10 00:23:29.656626 containerd[2015]: time="2025-07-10T00:23:29.656587912Z" level=info msg="StartContainer for \"e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9\" returns successfully" Jul 10 00:23:29.669448 systemd[1]: cri-containerd-e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9.scope: Deactivated successfully. Jul 10 00:23:29.669883 systemd[1]: cri-containerd-e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9.scope: Consumed 21ms CPU time, 7.4M memory peak, 2.1M read from disk. 
Jul 10 00:23:29.671417 containerd[2015]: time="2025-07-10T00:23:29.670902169Z" level=info msg="received exit event container_id:\"e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9\" id:\"e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9\" pid:5412 exited_at:{seconds:1752107009 nanos:670427045}" Jul 10 00:23:29.671958 containerd[2015]: time="2025-07-10T00:23:29.671930045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9\" id:\"e72fcd9d1d056b74077d7873a295ae31389d72e13c3f65530d407e9a21c882d9\" pid:5412 exited_at:{seconds:1752107009 nanos:670427045}" Jul 10 00:23:30.575184 containerd[2015]: time="2025-07-10T00:23:30.574932570Z" level=info msg="CreateContainer within sandbox \"71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:23:30.590352 containerd[2015]: time="2025-07-10T00:23:30.590304599Z" level=info msg="Container 6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:30.609686 containerd[2015]: time="2025-07-10T00:23:30.609642448Z" level=info msg="CreateContainer within sandbox \"71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679\"" Jul 10 00:23:30.610415 containerd[2015]: time="2025-07-10T00:23:30.610341482Z" level=info msg="StartContainer for \"6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679\"" Jul 10 00:23:30.612692 containerd[2015]: time="2025-07-10T00:23:30.612654027Z" level=info msg="connecting to shim 6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679" address="unix:///run/containerd/s/157547e00d8beef4b410f44f8371d1f1371ada27b980695067aa7715198fab5b" protocol=ttrpc version=3 Jul 10 00:23:30.642347 
systemd[1]: Started cri-containerd-6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679.scope - libcontainer container 6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679. Jul 10 00:23:30.687332 containerd[2015]: time="2025-07-10T00:23:30.687252511Z" level=info msg="StartContainer for \"6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679\" returns successfully" Jul 10 00:23:30.693576 systemd[1]: cri-containerd-6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679.scope: Deactivated successfully. Jul 10 00:23:30.695516 containerd[2015]: time="2025-07-10T00:23:30.695487226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679\" id:\"6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679\" pid:5458 exited_at:{seconds:1752107010 nanos:695190934}" Jul 10 00:23:30.695627 containerd[2015]: time="2025-07-10T00:23:30.695582269Z" level=info msg="received exit event container_id:\"6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679\" id:\"6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679\" pid:5458 exited_at:{seconds:1752107010 nanos:695190934}" Jul 10 00:23:30.718304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e1446182f6154e02d5e9bbac52bcb9e8a67fe095cee1e34143a7375764e1679-rootfs.mount: Deactivated successfully. 
Jul 10 00:23:31.582465 containerd[2015]: time="2025-07-10T00:23:31.582427828Z" level=info msg="CreateContainer within sandbox \"71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:23:31.593850 containerd[2015]: time="2025-07-10T00:23:31.593326397Z" level=info msg="Container 39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:31.606729 containerd[2015]: time="2025-07-10T00:23:31.606685936Z" level=info msg="CreateContainer within sandbox \"71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767\"" Jul 10 00:23:31.607454 containerd[2015]: time="2025-07-10T00:23:31.607289549Z" level=info msg="StartContainer for \"39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767\"" Jul 10 00:23:31.608035 containerd[2015]: time="2025-07-10T00:23:31.608005783Z" level=info msg="connecting to shim 39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767" address="unix:///run/containerd/s/157547e00d8beef4b410f44f8371d1f1371ada27b980695067aa7715198fab5b" protocol=ttrpc version=3 Jul 10 00:23:31.631356 systemd[1]: Started cri-containerd-39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767.scope - libcontainer container 39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767. Jul 10 00:23:31.673552 systemd[1]: cri-containerd-39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767.scope: Deactivated successfully. 
Jul 10 00:23:31.675592 containerd[2015]: time="2025-07-10T00:23:31.675544770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767\" id:\"39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767\" pid:5500 exited_at:{seconds:1752107011 nanos:674767286}" Jul 10 00:23:31.677948 containerd[2015]: time="2025-07-10T00:23:31.677800261Z" level=info msg="received exit event container_id:\"39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767\" id:\"39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767\" pid:5500 exited_at:{seconds:1752107011 nanos:674767286}" Jul 10 00:23:31.688769 containerd[2015]: time="2025-07-10T00:23:31.688720356Z" level=info msg="StartContainer for \"39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767\" returns successfully" Jul 10 00:23:31.707912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39407827178b582179c1683e5902b73789df66f438d99b0e3c1db1a9dc209767-rootfs.mount: Deactivated successfully. 
Jul 10 00:23:32.332804 kubelet[3309]: E0710 00:23:32.332760 3309 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:23:32.587375 containerd[2015]: time="2025-07-10T00:23:32.587271601Z" level=info msg="CreateContainer within sandbox \"71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:23:32.601347 containerd[2015]: time="2025-07-10T00:23:32.601033681Z" level=info msg="Container 208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:32.617682 containerd[2015]: time="2025-07-10T00:23:32.617637520Z" level=info msg="CreateContainer within sandbox \"71274f4e2fcc042a0e69b786fdad3377df307e5ef21da2c70fa7439ccd51070f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325\"" Jul 10 00:23:32.619087 containerd[2015]: time="2025-07-10T00:23:32.618276995Z" level=info msg="StartContainer for \"208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325\"" Jul 10 00:23:32.619655 containerd[2015]: time="2025-07-10T00:23:32.619623135Z" level=info msg="connecting to shim 208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325" address="unix:///run/containerd/s/157547e00d8beef4b410f44f8371d1f1371ada27b980695067aa7715198fab5b" protocol=ttrpc version=3 Jul 10 00:23:32.654349 systemd[1]: Started cri-containerd-208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325.scope - libcontainer container 208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325. 
Jul 10 00:23:32.696936 containerd[2015]: time="2025-07-10T00:23:32.696826137Z" level=info msg="StartContainer for \"208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325\" returns successfully" Jul 10 00:23:32.812226 containerd[2015]: time="2025-07-10T00:23:32.812190525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325\" id:\"cdc688af16bb82993f9c27bd5b742e9e9c4ae8c7afa06779848e1901935690ce\" pid:5570 exited_at:{seconds:1752107012 nanos:811827826}" Jul 10 00:23:33.395197 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 10 00:23:33.606864 kubelet[3309]: I0710 00:23:33.606316 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6rgkr" podStartSLOduration=6.606294753 podStartE2EDuration="6.606294753s" podCreationTimestamp="2025-07-10 00:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:23:33.605945174 +0000 UTC m=+101.483554635" watchObservedRunningTime="2025-07-10 00:23:33.606294753 +0000 UTC m=+101.483904176" Jul 10 00:23:34.355099 containerd[2015]: time="2025-07-10T00:23:34.355045623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325\" id:\"b63a86b7fabb7c0b9c1465a60f9b4f09901c11bb35f8590bfd36e20af79c5ddb\" pid:5645 exit_status:1 exited_at:{seconds:1752107014 nanos:354371865}" Jul 10 00:23:35.219551 kubelet[3309]: E0710 00:23:35.219119 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-s65wj" podUID="198f58fd-9bb5-4a90-bf21-5a7c5ddf0f86" Jul 10 00:23:35.344472 kubelet[3309]: I0710 
00:23:35.344423 3309 setters.go:602] "Node became not ready" node="ip-172-31-26-174" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:23:35Z","lastTransitionTime":"2025-07-10T00:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 10 00:23:36.351151 (udev-worker)[6058]: Network interface NamePolicy= disabled on kernel command line. Jul 10 00:23:36.352362 systemd-networkd[1734]: lxc_health: Link UP Jul 10 00:23:36.352732 systemd-networkd[1734]: lxc_health: Gained carrier Jul 10 00:23:36.682029 containerd[2015]: time="2025-07-10T00:23:36.681734243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325\" id:\"c9440ddb84e009991c10bfaec2182e2f21098433e574cdf54270eb70e4615690\" pid:6083 exit_status:1 exited_at:{seconds:1752107016 nanos:681181831}" Jul 10 00:23:37.217807 kubelet[3309]: E0710 00:23:37.217751 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-s65wj" podUID="198f58fd-9bb5-4a90-bf21-5a7c5ddf0f86" Jul 10 00:23:37.410361 systemd-networkd[1734]: lxc_health: Gained IPv6LL Jul 10 00:23:39.352325 containerd[2015]: time="2025-07-10T00:23:39.352274877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325\" id:\"10b99486ca3701a34f5b4e9bed9be08778652a5a4da78741760a90e30e19c0d0\" pid:6120 exited_at:{seconds:1752107019 nanos:351880953}" Jul 10 00:23:40.371700 ntpd[1973]: Listen normally on 14 lxc_health [fe80::e4ba:6ff:febd:efe6%14]:123 Jul 10 00:23:40.373106 ntpd[1973]: 10 Jul 00:23:40 ntpd[1973]: Listen normally on 14 
lxc_health [fe80::e4ba:6ff:febd:efe6%14]:123 Jul 10 00:23:41.573968 containerd[2015]: time="2025-07-10T00:23:41.573926414Z" level=info msg="TaskExit event in podsandbox handler container_id:\"208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325\" id:\"719fe8432ef09cf21c88787d78911b95a0b146027bff688cf0acd6e9f3ff7f72\" pid:6152 exited_at:{seconds:1752107021 nanos:573153498}" Jul 10 00:23:43.715265 containerd[2015]: time="2025-07-10T00:23:43.715219570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325\" id:\"299f3652d44ac3f0d47748558995c66b6c6994b595b548c6edb2434c0c20b868\" pid:6174 exited_at:{seconds:1752107023 nanos:714714192}" Jul 10 00:23:45.839385 containerd[2015]: time="2025-07-10T00:23:45.839344962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"208194e7ccdca2842300e9d1028d18b3f344bbae515cb82e60fa96e82b8ee325\" id:\"31bbe9d3875919bf3dd811d2eb84db8210e5cbbf2ec112aa340b8d9ed9a7a1d3\" pid:6196 exited_at:{seconds:1752107025 nanos:838368065}" Jul 10 00:23:45.885636 sshd[5298]: Connection closed by 139.178.89.65 port 58854 Jul 10 00:23:45.886930 sshd-session[5296]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:45.890757 systemd[1]: sshd@28-172.31.26.174:22-139.178.89.65:58854.service: Deactivated successfully. Jul 10 00:23:45.892609 systemd[1]: session-29.scope: Deactivated successfully. Jul 10 00:23:45.893739 systemd-logind[1989]: Session 29 logged out. Waiting for processes to exit. Jul 10 00:23:45.895343 systemd-logind[1989]: Removed session 29. 
Jul 10 00:23:52.207113 containerd[2015]: time="2025-07-10T00:23:52.207073053Z" level=info msg="StopPodSandbox for \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\"" Jul 10 00:23:52.207507 containerd[2015]: time="2025-07-10T00:23:52.207289836Z" level=info msg="TearDown network for sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" successfully" Jul 10 00:23:52.207507 containerd[2015]: time="2025-07-10T00:23:52.207303672Z" level=info msg="StopPodSandbox for \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" returns successfully" Jul 10 00:23:52.207707 containerd[2015]: time="2025-07-10T00:23:52.207683264Z" level=info msg="RemovePodSandbox for \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\"" Jul 10 00:23:52.212587 containerd[2015]: time="2025-07-10T00:23:52.212548598Z" level=info msg="Forcibly stopping sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\"" Jul 10 00:23:52.212731 containerd[2015]: time="2025-07-10T00:23:52.212714275Z" level=info msg="TearDown network for sandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" successfully" Jul 10 00:23:52.218188 containerd[2015]: time="2025-07-10T00:23:52.217530656Z" level=info msg="Ensure that sandbox 1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3 in task-service has been cleanup successfully" Jul 10 00:23:52.221363 containerd[2015]: time="2025-07-10T00:23:52.221328890Z" level=info msg="RemovePodSandbox \"1c3c3a6d87b74100dd2d7b9b2c3eb8a8248bb3808362a1aaed8879a5c68d61c3\" returns successfully" Jul 10 00:23:52.221851 containerd[2015]: time="2025-07-10T00:23:52.221815943Z" level=info msg="StopPodSandbox for \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\"" Jul 10 00:23:52.222041 containerd[2015]: time="2025-07-10T00:23:52.221971099Z" level=info msg="TearDown network for sandbox \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\" 
successfully" Jul 10 00:23:52.222041 containerd[2015]: time="2025-07-10T00:23:52.221993743Z" level=info msg="StopPodSandbox for \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\" returns successfully" Jul 10 00:23:52.222431 containerd[2015]: time="2025-07-10T00:23:52.222409480Z" level=info msg="RemovePodSandbox for \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\"" Jul 10 00:23:52.222504 containerd[2015]: time="2025-07-10T00:23:52.222437330Z" level=info msg="Forcibly stopping sandbox \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\"" Jul 10 00:23:52.222878 containerd[2015]: time="2025-07-10T00:23:52.222777403Z" level=info msg="TearDown network for sandbox \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\" successfully" Jul 10 00:23:52.224347 containerd[2015]: time="2025-07-10T00:23:52.224317802Z" level=info msg="Ensure that sandbox add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3 in task-service has been cleanup successfully" Jul 10 00:23:52.227626 containerd[2015]: time="2025-07-10T00:23:52.227597026Z" level=info msg="RemovePodSandbox \"add95e87bd9e7183a8cf5901e7a8823c9a783981783f98e30d3e4efa98abecc3\" returns successfully" Jul 10 00:24:02.883505 systemd[1]: cri-containerd-95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3.scope: Deactivated successfully. Jul 10 00:24:02.883904 systemd[1]: cri-containerd-95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3.scope: Consumed 3.041s CPU time, 68.3M memory peak, 21M read from disk. 
Jul 10 00:24:02.887596 containerd[2015]: time="2025-07-10T00:24:02.887557979Z" level=info msg="received exit event container_id:\"95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3\" id:\"95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3\" pid:3158 exit_status:1 exited_at:{seconds:1752107042 nanos:887269263}" Jul 10 00:24:02.888116 containerd[2015]: time="2025-07-10T00:24:02.887862782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3\" id:\"95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3\" pid:3158 exit_status:1 exited_at:{seconds:1752107042 nanos:887269263}" Jul 10 00:24:02.914663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3-rootfs.mount: Deactivated successfully. Jul 10 00:24:03.642840 kubelet[3309]: I0710 00:24:03.642735 3309 scope.go:117] "RemoveContainer" containerID="95d79e0cf8300ec90bc03d00b66de92367adf9d245dda030f7702b919a374ef3" Jul 10 00:24:03.645339 containerd[2015]: time="2025-07-10T00:24:03.645307325Z" level=info msg="CreateContainer within sandbox \"44a32d05605ba6afcbb2b6a07a73c069f4da254b4577e7252a9fccd7cb8742d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 10 00:24:03.657037 containerd[2015]: time="2025-07-10T00:24:03.656316249Z" level=info msg="Container 8790c02fc74f9d41548957bf3463ed712e5cf11f26114e73c0fc82486b20709f: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:03.661377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461430911.mount: Deactivated successfully. 
Jul 10 00:24:03.665006 containerd[2015]: time="2025-07-10T00:24:03.664955589Z" level=info msg="CreateContainer within sandbox \"44a32d05605ba6afcbb2b6a07a73c069f4da254b4577e7252a9fccd7cb8742d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8790c02fc74f9d41548957bf3463ed712e5cf11f26114e73c0fc82486b20709f\"" Jul 10 00:24:03.665472 containerd[2015]: time="2025-07-10T00:24:03.665416417Z" level=info msg="StartContainer for \"8790c02fc74f9d41548957bf3463ed712e5cf11f26114e73c0fc82486b20709f\"" Jul 10 00:24:03.666368 containerd[2015]: time="2025-07-10T00:24:03.666344559Z" level=info msg="connecting to shim 8790c02fc74f9d41548957bf3463ed712e5cf11f26114e73c0fc82486b20709f" address="unix:///run/containerd/s/75911ad9daa6fe2fbb856562fc86ad7a7b96d89df473909eb931c5b75e9d9f11" protocol=ttrpc version=3 Jul 10 00:24:03.688365 systemd[1]: Started cri-containerd-8790c02fc74f9d41548957bf3463ed712e5cf11f26114e73c0fc82486b20709f.scope - libcontainer container 8790c02fc74f9d41548957bf3463ed712e5cf11f26114e73c0fc82486b20709f. Jul 10 00:24:03.746115 containerd[2015]: time="2025-07-10T00:24:03.746053568Z" level=info msg="StartContainer for \"8790c02fc74f9d41548957bf3463ed712e5cf11f26114e73c0fc82486b20709f\" returns successfully" Jul 10 00:24:03.802492 kubelet[3309]: E0710 00:24:03.802438 3309 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-174?timeout=10s\": context deadline exceeded" Jul 10 00:24:09.526572 systemd[1]: cri-containerd-8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62.scope: Deactivated successfully. Jul 10 00:24:09.528361 systemd[1]: cri-containerd-8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62.scope: Consumed 2.375s CPU time, 32.2M memory peak, 13.5M read from disk. 
Jul 10 00:24:09.530676 containerd[2015]: time="2025-07-10T00:24:09.530446987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62\" id:\"8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62\" pid:3135 exit_status:1 exited_at:{seconds:1752107049 nanos:528141948}" Jul 10 00:24:09.531450 containerd[2015]: time="2025-07-10T00:24:09.530799525Z" level=info msg="received exit event container_id:\"8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62\" id:\"8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62\" pid:3135 exit_status:1 exited_at:{seconds:1752107049 nanos:528141948}" Jul 10 00:24:09.553504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62-rootfs.mount: Deactivated successfully. Jul 10 00:24:09.659202 kubelet[3309]: I0710 00:24:09.659152 3309 scope.go:117] "RemoveContainer" containerID="8540192a93ebc1ce96b9bee9b7266118251551e7e80021d7324fa2655edc9d62" Jul 10 00:24:09.661570 containerd[2015]: time="2025-07-10T00:24:09.661520505Z" level=info msg="CreateContainer within sandbox \"72a1e55aee7ae97f9badec8fa173bb59551e09120c5bfdc908e8360351b0305c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 10 00:24:09.679992 containerd[2015]: time="2025-07-10T00:24:09.679328133Z" level=info msg="Container 02f56ff4d174ca1d5fc1c8ef44f9d7ee3e7dfe005cd69b8345729bde7e2dadb9: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:09.689822 containerd[2015]: time="2025-07-10T00:24:09.689774636Z" level=info msg="CreateContainer within sandbox \"72a1e55aee7ae97f9badec8fa173bb59551e09120c5bfdc908e8360351b0305c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"02f56ff4d174ca1d5fc1c8ef44f9d7ee3e7dfe005cd69b8345729bde7e2dadb9\"" Jul 10 00:24:09.690263 containerd[2015]: time="2025-07-10T00:24:09.690232201Z" level=info msg="StartContainer for 
\"02f56ff4d174ca1d5fc1c8ef44f9d7ee3e7dfe005cd69b8345729bde7e2dadb9\"" Jul 10 00:24:09.691283 containerd[2015]: time="2025-07-10T00:24:09.691253361Z" level=info msg="connecting to shim 02f56ff4d174ca1d5fc1c8ef44f9d7ee3e7dfe005cd69b8345729bde7e2dadb9" address="unix:///run/containerd/s/f2c8a8b95e7c6175ac1c785a6ae0d7a9700e4412a1e418b8180e5d4b728c2d8d" protocol=ttrpc version=3 Jul 10 00:24:09.713398 systemd[1]: Started cri-containerd-02f56ff4d174ca1d5fc1c8ef44f9d7ee3e7dfe005cd69b8345729bde7e2dadb9.scope - libcontainer container 02f56ff4d174ca1d5fc1c8ef44f9d7ee3e7dfe005cd69b8345729bde7e2dadb9. Jul 10 00:24:09.764367 containerd[2015]: time="2025-07-10T00:24:09.764314342Z" level=info msg="StartContainer for \"02f56ff4d174ca1d5fc1c8ef44f9d7ee3e7dfe005cd69b8345729bde7e2dadb9\" returns successfully" Jul 10 00:24:13.803915 kubelet[3309]: E0710 00:24:13.803587 3309 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-174?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"