Jul 7 00:14:53.936340 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:58:13 -00 2025 Jul 7 00:14:53.936370 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:14:53.936380 kernel: BIOS-provided physical RAM map: Jul 7 00:14:53.936386 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 7 00:14:53.936393 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jul 7 00:14:53.936399 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jul 7 00:14:53.936407 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jul 7 00:14:53.936414 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jul 7 00:14:53.936423 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jul 7 00:14:53.936430 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jul 7 00:14:53.936437 kernel: NX (Execute Disable) protection: active Jul 7 00:14:53.936444 kernel: APIC: Static calls initialized Jul 7 00:14:53.936451 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jul 7 00:14:53.936458 kernel: extended physical RAM map: Jul 7 00:14:53.936469 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 7 00:14:53.936476 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jul 7 00:14:53.936484 kernel: reserve setup_data: [mem 
0x00000000768c0018-0x00000000768c8e57] usable Jul 7 00:14:53.936491 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jul 7 00:14:53.936499 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jul 7 00:14:53.936506 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jul 7 00:14:53.936514 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jul 7 00:14:53.936522 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jul 7 00:14:53.936529 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jul 7 00:14:53.936536 kernel: efi: EFI v2.7 by EDK II Jul 7 00:14:53.936546 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Jul 7 00:14:53.936554 kernel: secureboot: Secure boot disabled Jul 7 00:14:53.936561 kernel: SMBIOS 2.7 present. Jul 7 00:14:53.936569 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jul 7 00:14:53.936576 kernel: DMI: Memory slots populated: 1/1 Jul 7 00:14:53.936584 kernel: Hypervisor detected: KVM Jul 7 00:14:53.936591 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 00:14:53.936599 kernel: kvm-clock: using sched offset of 5046473428 cycles Jul 7 00:14:53.936607 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 00:14:53.936615 kernel: tsc: Detected 2499.996 MHz processor Jul 7 00:14:53.936623 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 00:14:53.936633 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 00:14:53.936641 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jul 7 00:14:53.936649 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 7 00:14:53.936656 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 00:14:53.936664 kernel: Using GB pages for direct mapping Jul 7 
00:14:53.936676 kernel: ACPI: Early table checksum verification disabled Jul 7 00:14:53.936686 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jul 7 00:14:53.936694 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jul 7 00:14:53.936703 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 7 00:14:53.936711 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jul 7 00:14:53.936719 kernel: ACPI: FACS 0x00000000789D0000 000040 Jul 7 00:14:53.936727 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jul 7 00:14:53.936736 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 7 00:14:53.936744 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 7 00:14:53.936754 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jul 7 00:14:53.936763 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jul 7 00:14:53.936771 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 7 00:14:53.937177 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 7 00:14:53.937191 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jul 7 00:14:53.937199 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jul 7 00:14:53.937208 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jul 7 00:14:53.937216 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jul 7 00:14:53.937224 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jul 7 00:14:53.937236 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jul 7 00:14:53.937245 kernel: ACPI: Reserving APIC table memory at [mem 
0x78959000-0x78959075] Jul 7 00:14:53.937254 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jul 7 00:14:53.937262 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jul 7 00:14:53.937270 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jul 7 00:14:53.937278 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jul 7 00:14:53.937286 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Jul 7 00:14:53.937295 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jul 7 00:14:53.937303 kernel: NUMA: Initialized distance table, cnt=1 Jul 7 00:14:53.937314 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Jul 7 00:14:53.937322 kernel: Zone ranges: Jul 7 00:14:53.937330 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 00:14:53.937339 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jul 7 00:14:53.937347 kernel: Normal empty Jul 7 00:14:53.937355 kernel: Device empty Jul 7 00:14:53.937363 kernel: Movable zone start for each node Jul 7 00:14:53.937371 kernel: Early memory node ranges Jul 7 00:14:53.937379 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 7 00:14:53.937390 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jul 7 00:14:53.937399 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jul 7 00:14:53.937407 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jul 7 00:14:53.937415 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 00:14:53.937423 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 7 00:14:53.937432 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 7 00:14:53.937441 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jul 7 00:14:53.937449 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 7 00:14:53.937457 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 00:14:53.937466 kernel: IOAPIC[0]: 
apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jul 7 00:14:53.937476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 00:14:53.937485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 00:14:53.937493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 00:14:53.937502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 00:14:53.937510 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 00:14:53.937518 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 7 00:14:53.937527 kernel: TSC deadline timer available Jul 7 00:14:53.937535 kernel: CPU topo: Max. logical packages: 1 Jul 7 00:14:53.937544 kernel: CPU topo: Max. logical dies: 1 Jul 7 00:14:53.937555 kernel: CPU topo: Max. dies per package: 1 Jul 7 00:14:53.937563 kernel: CPU topo: Max. threads per core: 2 Jul 7 00:14:53.937571 kernel: CPU topo: Num. cores per package: 1 Jul 7 00:14:53.937579 kernel: CPU topo: Num. 
threads per package: 2 Jul 7 00:14:53.937588 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 7 00:14:53.937596 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 7 00:14:53.937605 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jul 7 00:14:53.937613 kernel: Booting paravirtualized kernel on KVM Jul 7 00:14:53.937622 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 00:14:53.937633 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 7 00:14:53.937641 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 7 00:14:53.937650 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 7 00:14:53.937666 kernel: pcpu-alloc: [0] 0 1 Jul 7 00:14:53.937680 kernel: kvm-guest: PV spinlocks enabled Jul 7 00:14:53.937693 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 7 00:14:53.937709 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:14:53.937723 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 00:14:53.937740 kernel: random: crng init done Jul 7 00:14:53.937753 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 00:14:53.937766 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 7 00:14:53.938813 kernel: Fallback order for Node 0: 0 Jul 7 00:14:53.938840 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 509451 Jul 7 00:14:53.938850 kernel: Policy zone: DMA32 Jul 7 00:14:53.938871 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 00:14:53.938881 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 00:14:53.938890 kernel: Kernel/User page tables isolation: enabled Jul 7 00:14:53.938899 kernel: ftrace: allocating 40095 entries in 157 pages Jul 7 00:14:53.938908 kernel: ftrace: allocated 157 pages with 5 groups Jul 7 00:14:53.938919 kernel: Dynamic Preempt: voluntary Jul 7 00:14:53.938928 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 00:14:53.938938 kernel: rcu: RCU event tracing is enabled. Jul 7 00:14:53.938947 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 00:14:53.938956 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 00:14:53.938965 kernel: Rude variant of Tasks RCU enabled. Jul 7 00:14:53.938977 kernel: Tracing variant of Tasks RCU enabled. Jul 7 00:14:53.938986 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 00:14:53.938995 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 00:14:53.939004 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:14:53.939013 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:14:53.939022 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:14:53.939031 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 7 00:14:53.939041 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 7 00:14:53.939052 kernel: Console: colour dummy device 80x25 Jul 7 00:14:53.939061 kernel: printk: legacy console [tty0] enabled Jul 7 00:14:53.939070 kernel: printk: legacy console [ttyS0] enabled Jul 7 00:14:53.939079 kernel: ACPI: Core revision 20240827 Jul 7 00:14:53.939087 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jul 7 00:14:53.939097 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 00:14:53.939105 kernel: x2apic enabled Jul 7 00:14:53.939114 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 00:14:53.939123 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jul 7 00:14:53.939133 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jul 7 00:14:53.939144 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 7 00:14:53.939153 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 7 00:14:53.939162 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 00:14:53.939170 kernel: Spectre V2 : Mitigation: Retpolines Jul 7 00:14:53.939179 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 00:14:53.939188 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jul 7 00:14:53.939197 kernel: RETBleed: Vulnerable Jul 7 00:14:53.939205 kernel: Speculative Store Bypass: Vulnerable Jul 7 00:14:53.939214 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jul 7 00:14:53.939223 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 7 00:14:53.939234 kernel: GDS: Unknown: Dependent on hypervisor status Jul 7 00:14:53.939243 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 00:14:53.939251 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 00:14:53.939260 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 00:14:53.939269 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 00:14:53.939278 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 7 00:14:53.939287 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 7 00:14:53.939296 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 7 00:14:53.939304 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 7 00:14:53.939313 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 7 00:14:53.939322 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jul 7 00:14:53.939333 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 00:14:53.939342 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 7 00:14:53.939351 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 7 00:14:53.939360 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jul 7 00:14:53.939368 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jul 7 00:14:53.939377 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jul 7 00:14:53.939386 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jul 7 00:14:53.939395 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jul 7 00:14:53.939403 kernel: Freeing SMP alternatives memory: 32K Jul 7 00:14:53.939412 kernel: pid_max: default: 32768 minimum: 301 Jul 7 00:14:53.939421 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 00:14:53.939431 kernel: landlock: Up and running. Jul 7 00:14:53.939440 kernel: SELinux: Initializing. Jul 7 00:14:53.939449 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 00:14:53.939458 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 00:14:53.939466 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 7 00:14:53.939475 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 7 00:14:53.939484 kernel: signal: max sigframe size: 3632 Jul 7 00:14:53.939493 kernel: rcu: Hierarchical SRCU implementation. Jul 7 00:14:53.939502 kernel: rcu: Max phase no-delay instances is 400. Jul 7 00:14:53.939511 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 7 00:14:53.939522 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 7 00:14:53.939532 kernel: smp: Bringing up secondary CPUs ... Jul 7 00:14:53.939541 kernel: smpboot: x86: Booting SMP configuration: Jul 7 00:14:53.939550 kernel: .... node #0, CPUs: #1 Jul 7 00:14:53.939559 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 7 00:14:53.939569 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jul 7 00:14:53.939578 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 00:14:53.939600 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jul 7 00:14:53.939609 kernel: Memory: 1908052K/2037804K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 125188K reserved, 0K cma-reserved) Jul 7 00:14:53.939620 kernel: devtmpfs: initialized Jul 7 00:14:53.939629 kernel: x86/mm: Memory block size: 128MB Jul 7 00:14:53.939638 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jul 7 00:14:53.939647 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 00:14:53.939656 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 00:14:53.939665 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 00:14:53.939674 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 00:14:53.939683 kernel: audit: initializing netlink subsys (disabled) Jul 7 00:14:53.939694 kernel: audit: type=2000 audit(1751847292.580:1): state=initialized audit_enabled=0 res=1 Jul 7 00:14:53.939703 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 00:14:53.939712 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 00:14:53.939721 kernel: cpuidle: using governor menu Jul 7 00:14:53.939730 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 00:14:53.939739 kernel: dca service started, version 1.12.1 Jul 7 00:14:53.939748 kernel: PCI: Using configuration type 1 for base access Jul 7 00:14:53.939757 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 7 00:14:53.939766 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 00:14:53.939778 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 00:14:53.940835 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 00:14:53.940847 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 00:14:53.940857 kernel: ACPI: Added _OSI(Module Device) Jul 7 00:14:53.940866 kernel: ACPI: Added _OSI(Processor Device) Jul 7 00:14:53.940875 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 00:14:53.940885 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 7 00:14:53.940894 kernel: ACPI: Interpreter enabled Jul 7 00:14:53.940902 kernel: ACPI: PM: (supports S0 S5) Jul 7 00:14:53.940916 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 00:14:53.940926 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 00:14:53.940935 kernel: PCI: Using E820 reservations for host bridge windows Jul 7 00:14:53.940943 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 7 00:14:53.940952 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 00:14:53.941146 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:14:53.941258 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 7 00:14:53.941360 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 7 00:14:53.941377 kernel: acpiphp: Slot [3] registered Jul 7 00:14:53.941386 kernel: acpiphp: Slot [4] registered Jul 7 00:14:53.941396 kernel: acpiphp: Slot [5] registered Jul 7 00:14:53.941405 kernel: acpiphp: Slot [6] registered Jul 7 00:14:53.941414 kernel: acpiphp: Slot [7] registered Jul 7 00:14:53.941423 kernel: acpiphp: Slot [8] registered Jul 7 00:14:53.941432 kernel: acpiphp: Slot [9] registered Jul 7 00:14:53.941441 
kernel: acpiphp: Slot [10] registered Jul 7 00:14:53.941450 kernel: acpiphp: Slot [11] registered Jul 7 00:14:53.941461 kernel: acpiphp: Slot [12] registered Jul 7 00:14:53.941470 kernel: acpiphp: Slot [13] registered Jul 7 00:14:53.941479 kernel: acpiphp: Slot [14] registered Jul 7 00:14:53.941488 kernel: acpiphp: Slot [15] registered Jul 7 00:14:53.941497 kernel: acpiphp: Slot [16] registered Jul 7 00:14:53.941507 kernel: acpiphp: Slot [17] registered Jul 7 00:14:53.941520 kernel: acpiphp: Slot [18] registered Jul 7 00:14:53.941529 kernel: acpiphp: Slot [19] registered Jul 7 00:14:53.941541 kernel: acpiphp: Slot [20] registered Jul 7 00:14:53.941553 kernel: acpiphp: Slot [21] registered Jul 7 00:14:53.941562 kernel: acpiphp: Slot [22] registered Jul 7 00:14:53.941571 kernel: acpiphp: Slot [23] registered Jul 7 00:14:53.941580 kernel: acpiphp: Slot [24] registered Jul 7 00:14:53.941589 kernel: acpiphp: Slot [25] registered Jul 7 00:14:53.941598 kernel: acpiphp: Slot [26] registered Jul 7 00:14:53.941606 kernel: acpiphp: Slot [27] registered Jul 7 00:14:53.941615 kernel: acpiphp: Slot [28] registered Jul 7 00:14:53.941624 kernel: acpiphp: Slot [29] registered Jul 7 00:14:53.941634 kernel: acpiphp: Slot [30] registered Jul 7 00:14:53.941645 kernel: acpiphp: Slot [31] registered Jul 7 00:14:53.941654 kernel: PCI host bridge to bus 0000:00 Jul 7 00:14:53.941843 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 00:14:53.942031 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 00:14:53.942119 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 00:14:53.942206 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 7 00:14:53.942287 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jul 7 00:14:53.942371 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 00:14:53.942481 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 
0x060000 conventional PCI endpoint Jul 7 00:14:53.942730 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jul 7 00:14:53.945293 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Jul 7 00:14:53.945410 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 7 00:14:53.945504 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jul 7 00:14:53.945601 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jul 7 00:14:53.945715 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jul 7 00:14:53.945870 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jul 7 00:14:53.945965 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jul 7 00:14:53.946055 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jul 7 00:14:53.946154 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Jul 7 00:14:53.946246 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Jul 7 00:14:53.946342 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 7 00:14:53.946430 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 00:14:53.946528 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Jul 7 00:14:53.946618 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Jul 7 00:14:53.946713 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Jul 7 00:14:53.949476 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Jul 7 00:14:53.949502 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 00:14:53.949520 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 00:14:53.949529 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 7 00:14:53.949538 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 00:14:53.949547 kernel: ACPI: PCI: Interrupt link LNKS configured 
for IRQ 9 Jul 7 00:14:53.949556 kernel: iommu: Default domain type: Translated Jul 7 00:14:53.949565 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 00:14:53.949574 kernel: efivars: Registered efivars operations Jul 7 00:14:53.949583 kernel: PCI: Using ACPI for IRQ routing Jul 7 00:14:53.949592 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 00:14:53.949605 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jul 7 00:14:53.949614 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jul 7 00:14:53.949623 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jul 7 00:14:53.949751 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jul 7 00:14:53.949863 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jul 7 00:14:53.949957 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 00:14:53.949969 kernel: vgaarb: loaded Jul 7 00:14:53.949979 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jul 7 00:14:53.949991 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jul 7 00:14:53.950000 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 00:14:53.950009 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 00:14:53.950018 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 00:14:53.950028 kernel: pnp: PnP ACPI init Jul 7 00:14:53.950037 kernel: pnp: PnP ACPI: found 5 devices Jul 7 00:14:53.950046 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 00:14:53.950055 kernel: NET: Registered PF_INET protocol family Jul 7 00:14:53.950064 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 00:14:53.950076 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 7 00:14:53.950085 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 00:14:53.950094 kernel: TCP established hash table 
entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 00:14:53.950103 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 00:14:53.950112 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 7 00:14:53.950121 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 00:14:53.950130 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 00:14:53.950140 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 00:14:53.950149 kernel: NET: Registered PF_XDP protocol family Jul 7 00:14:53.950239 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 00:14:53.950321 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 00:14:53.950401 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 00:14:53.950482 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 7 00:14:53.950562 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jul 7 00:14:53.950656 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 7 00:14:53.950669 kernel: PCI: CLS 0 bytes, default 64 Jul 7 00:14:53.950678 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 7 00:14:53.950691 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jul 7 00:14:53.950699 kernel: clocksource: Switched to clocksource tsc Jul 7 00:14:53.950708 kernel: Initialise system trusted keyrings Jul 7 00:14:53.950717 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 7 00:14:53.950726 kernel: Key type asymmetric registered Jul 7 00:14:53.950735 kernel: Asymmetric key parser 'x509' registered Jul 7 00:14:53.950743 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 00:14:53.950753 kernel: io scheduler mq-deadline registered Jul 7 00:14:53.950761 kernel: io scheduler kyber registered Jul 7 
00:14:53.950773 kernel: io scheduler bfq registered Jul 7 00:14:53.950793 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 00:14:53.950802 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 00:14:53.950811 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 00:14:53.950820 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 00:14:53.950830 kernel: i8042: Warning: Keylock active Jul 7 00:14:53.950839 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 00:14:53.950848 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 00:14:53.950954 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 7 00:14:53.951044 kernel: rtc_cmos 00:00: registered as rtc0 Jul 7 00:14:53.951128 kernel: rtc_cmos 00:00: setting system clock to 2025-07-07T00:14:53 UTC (1751847293) Jul 7 00:14:53.951211 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 7 00:14:53.951222 kernel: intel_pstate: CPU model not supported Jul 7 00:14:53.951250 kernel: efifb: probing for efifb Jul 7 00:14:53.951263 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jul 7 00:14:53.951273 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jul 7 00:14:53.951284 kernel: efifb: scrolling: redraw Jul 7 00:14:53.951294 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 00:14:53.951304 kernel: Console: switching to colour frame buffer device 100x37 Jul 7 00:14:53.951313 kernel: fb0: EFI VGA frame buffer device Jul 7 00:14:53.951323 kernel: pstore: Using crash dump compression: deflate Jul 7 00:14:53.951333 kernel: pstore: Registered efi_pstore as persistent store backend Jul 7 00:14:53.951343 kernel: NET: Registered PF_INET6 protocol family Jul 7 00:14:53.951352 kernel: Segment Routing with IPv6 Jul 7 00:14:53.951362 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 00:14:53.951371 kernel: NET: Registered PF_PACKET protocol family Jul 7 00:14:53.951383 kernel: Key type dns_resolver 
registered Jul 7 00:14:53.951392 kernel: IPI shorthand broadcast: enabled Jul 7 00:14:53.951402 kernel: sched_clock: Marking stable (2668003042, 147603088)->(2903388502, -87782372) Jul 7 00:14:53.951411 kernel: registered taskstats version 1 Jul 7 00:14:53.951420 kernel: Loading compiled-in X.509 certificates Jul 7 00:14:53.951430 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 025c05e23c9778f7a70ff09fb369dd949499fb06' Jul 7 00:14:53.951439 kernel: Demotion targets for Node 0: null Jul 7 00:14:53.951449 kernel: Key type .fscrypt registered Jul 7 00:14:53.951458 kernel: Key type fscrypt-provisioning registered Jul 7 00:14:53.951470 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 00:14:53.951480 kernel: ima: Allocated hash algorithm: sha1 Jul 7 00:14:53.951489 kernel: ima: No architecture policies found Jul 7 00:14:53.951499 kernel: clk: Disabling unused clocks Jul 7 00:14:53.951508 kernel: Warning: unable to open an initial console. Jul 7 00:14:53.951521 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 00:14:53.951530 kernel: Write protecting the kernel read-only data: 24576k Jul 7 00:14:53.951540 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 00:14:53.951551 kernel: Run /init as init process Jul 7 00:14:53.951564 kernel: with arguments: Jul 7 00:14:53.951573 kernel: /init Jul 7 00:14:53.951582 kernel: with environment: Jul 7 00:14:53.951592 kernel: HOME=/ Jul 7 00:14:53.951601 kernel: TERM=linux Jul 7 00:14:53.951613 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 00:14:53.951623 systemd[1]: Successfully made /usr/ read-only. 
Jul 7 00:14:53.951637 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 00:14:53.951647 systemd[1]: Detected virtualization amazon.
Jul 7 00:14:53.951657 systemd[1]: Detected architecture x86-64.
Jul 7 00:14:53.951667 systemd[1]: Running in initrd.
Jul 7 00:14:53.951676 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:14:53.951689 systemd[1]: Hostname set to .
Jul 7 00:14:53.951699 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:14:53.951708 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:14:53.951718 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:14:53.951728 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:14:53.951739 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:14:53.951749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:14:53.951759 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:14:53.951772 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:14:53.951802 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:14:53.951813 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:14:53.951823 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:14:53.951833 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:14:53.951843 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:14:53.951852 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:14:53.951865 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:14:53.951875 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:14:53.951885 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:14:53.951894 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:14:53.951904 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:14:53.951914 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 00:14:53.951924 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:14:53.951934 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:14:53.951947 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:14:53.951957 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:14:53.951967 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 00:14:53.951977 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:14:53.951986 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:14:53.951996 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 00:14:53.952006 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 00:14:53.952016 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:14:53.952026 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:14:53.952038 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:14:53.952048 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 00:14:53.952058 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:14:53.952068 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 00:14:53.952107 systemd-journald[207]: Collecting audit messages is disabled.
Jul 7 00:14:53.952133 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:14:53.952143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:14:53.952154 systemd-journald[207]: Journal started
Jul 7 00:14:53.952179 systemd-journald[207]: Runtime Journal (/run/log/journal/ec28605d1aa97c7fa57971a3f6872eb8) is 4.8M, max 38.4M, 33.6M free.
Jul 7 00:14:53.935928 systemd-modules-load[208]: Inserted module 'overlay'
Jul 7 00:14:53.957309 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:14:53.959881 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:14:53.964923 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:14:53.970949 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 00:14:53.972804 kernel: Bridge firewalling registered
Jul 7 00:14:53.972292 systemd-modules-load[208]: Inserted module 'br_netfilter'
Jul 7 00:14:53.973125 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:14:53.976964 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:14:53.978146 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:14:53.983416 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:14:53.991777 systemd-tmpfiles[225]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 00:14:53.994170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:14:54.002408 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:14:54.005878 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 00:14:54.006959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:14:54.010075 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:14:54.011452 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:14:54.013974 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 00:14:54.038760 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:14:54.067696 systemd-resolved[245]: Positive Trust Anchors:
Jul 7 00:14:54.068804 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:14:54.068876 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:14:54.078088 systemd-resolved[245]: Defaulting to hostname 'linux'.
Jul 7 00:14:54.079528 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:14:54.080643 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:14:54.134819 kernel: SCSI subsystem initialized
Jul 7 00:14:54.144811 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 00:14:54.156831 kernel: iscsi: registered transport (tcp)
Jul 7 00:14:54.179900 kernel: iscsi: registered transport (qla4xxx)
Jul 7 00:14:54.179989 kernel: QLogic iSCSI HBA Driver
Jul 7 00:14:54.199587 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:14:54.220011 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:14:54.223535 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:14:54.268166 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:14:54.270424 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 00:14:54.323824 kernel: raid6: avx512x4 gen() 17740 MB/s
Jul 7 00:14:54.341828 kernel: raid6: avx512x2 gen() 17749 MB/s
Jul 7 00:14:54.359819 kernel: raid6: avx512x1 gen() 17898 MB/s
Jul 7 00:14:54.377817 kernel: raid6: avx2x4 gen() 17513 MB/s
Jul 7 00:14:54.395831 kernel: raid6: avx2x2 gen() 17816 MB/s
Jul 7 00:14:54.414149 kernel: raid6: avx2x1 gen() 13542 MB/s
Jul 7 00:14:54.414210 kernel: raid6: using algorithm avx512x1 gen() 17898 MB/s
Jul 7 00:14:54.433367 kernel: raid6: .... xor() 21230 MB/s, rmw enabled
Jul 7 00:14:54.433439 kernel: raid6: using avx512x2 recovery algorithm
Jul 7 00:14:54.454823 kernel: xor: automatically using best checksumming function avx
Jul 7 00:14:54.624819 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 00:14:54.632396 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:14:54.634708 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:14:54.662936 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Jul 7 00:14:54.669604 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:14:54.673941 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 00:14:54.703404 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Jul 7 00:14:54.730420 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:14:54.732903 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:14:54.788945 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:14:54.793097 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:14:54.848809 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 7 00:14:54.849049 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 7 00:14:54.857808 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jul 7 00:14:54.869979 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:4c:58:3a:d8:b7
Jul 7 00:14:54.872803 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 00:14:54.876812 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Jul 7 00:14:54.886909 (udev-worker)[511]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 00:14:54.892821 kernel: AES CTR mode by8 optimization enabled
Jul 7 00:14:54.898003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:14:54.898203 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:14:54.899686 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:14:54.901203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:14:54.913854 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:14:54.915009 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 7 00:14:54.919347 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:14:54.919974 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:14:54.923147 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 7 00:14:54.923998 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:14:54.939021 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 7 00:14:54.945518 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 00:14:54.945587 kernel: GPT:9289727 != 16777215
Jul 7 00:14:54.945600 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 00:14:54.947889 kernel: GPT:9289727 != 16777215
Jul 7 00:14:54.947938 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 00:14:54.949175 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:14:54.954143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:14:54.987805 kernel: nvme nvme0: using unchecked data buffer
Jul 7 00:14:55.088351 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 7 00:14:55.099505 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 7 00:14:55.100370 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:14:55.119459 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 7 00:14:55.120100 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 7 00:14:55.131862 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 7 00:14:55.132629 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:14:55.134025 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:14:55.135213 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:14:55.137010 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:14:55.140032 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:14:55.157805 disk-uuid[691]: Primary Header is updated.
Jul 7 00:14:55.157805 disk-uuid[691]: Secondary Entries is updated.
Jul 7 00:14:55.157805 disk-uuid[691]: Secondary Header is updated.
Jul 7 00:14:55.165813 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:14:55.167827 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:14:56.181989 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:14:56.183810 disk-uuid[693]: The operation has completed successfully.
Jul 7 00:14:56.313440 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:14:56.313593 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:14:56.354898 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:14:56.381135 sh[959]: Success
Jul 7 00:14:56.404429 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:14:56.404545 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:14:56.405380 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 00:14:56.417854 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jul 7 00:14:56.516110 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:14:56.520892 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:14:56.534257 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:14:56.561380 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 00:14:56.561464 kernel: BTRFS: device fsid 9d729180-1373-4e9f-840c-4db0e9220239 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (982)
Jul 7 00:14:56.566810 kernel: BTRFS info (device dm-0): first mount of filesystem 9d729180-1373-4e9f-840c-4db0e9220239
Jul 7 00:14:56.566881 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:14:56.569625 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 00:14:56.697516 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:14:56.698811 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 00:14:56.699408 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:14:56.700419 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:14:56.702136 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:14:56.747819 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1016)
Jul 7 00:14:56.756187 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:14:56.756287 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:14:56.756311 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 00:14:56.777811 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:14:56.779874 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:14:56.783963 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:14:56.810358 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:14:56.813047 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:14:56.865828 systemd-networkd[1151]: lo: Link UP
Jul 7 00:14:56.865839 systemd-networkd[1151]: lo: Gained carrier
Jul 7 00:14:56.867624 systemd-networkd[1151]: Enumeration completed
Jul 7 00:14:56.867759 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:14:56.868549 systemd-networkd[1151]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:14:56.868555 systemd-networkd[1151]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:14:56.868950 systemd[1]: Reached target network.target - Network.
Jul 7 00:14:56.872117 systemd-networkd[1151]: eth0: Link UP
Jul 7 00:14:56.872122 systemd-networkd[1151]: eth0: Gained carrier
Jul 7 00:14:56.872140 systemd-networkd[1151]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:14:56.886920 systemd-networkd[1151]: eth0: DHCPv4 address 172.31.30.121/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 7 00:14:57.300298 ignition[1120]: Ignition 2.21.0
Jul 7 00:14:57.300314 ignition[1120]: Stage: fetch-offline
Jul 7 00:14:57.300552 ignition[1120]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:57.300564 ignition[1120]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:14:57.301314 ignition[1120]: Ignition finished successfully
Jul 7 00:14:57.303347 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:14:57.305935 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 00:14:57.332579 ignition[1161]: Ignition 2.21.0
Jul 7 00:14:57.332598 ignition[1161]: Stage: fetch
Jul 7 00:14:57.333446 ignition[1161]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:57.333463 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:14:57.333611 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:14:57.354324 ignition[1161]: PUT result: OK
Jul 7 00:14:57.356532 ignition[1161]: parsed url from cmdline: ""
Jul 7 00:14:57.356541 ignition[1161]: no config URL provided
Jul 7 00:14:57.356548 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:14:57.356560 ignition[1161]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:14:57.356579 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:14:57.357309 ignition[1161]: PUT result: OK
Jul 7 00:14:57.357376 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 7 00:14:57.358253 ignition[1161]: GET result: OK
Jul 7 00:14:57.358366 ignition[1161]: parsing config with SHA512: c78b2f78a99cd11167e6bbff23426d109297059bd7e7fbf8db02c18f256cc363c34d050e18621fcba5f9417f13e0a06fb8338899e56ac6919974673afb19fb6b
Jul 7 00:14:57.362528 unknown[1161]: fetched base config from "system"
Jul 7 00:14:57.362541 unknown[1161]: fetched base config from "system"
Jul 7 00:14:57.363075 ignition[1161]: fetch: fetch complete
Jul 7 00:14:57.362547 unknown[1161]: fetched user config from "aws"
Jul 7 00:14:57.363080 ignition[1161]: fetch: fetch passed
Jul 7 00:14:57.363127 ignition[1161]: Ignition finished successfully
Jul 7 00:14:57.365524 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 00:14:57.367075 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:14:57.396945 ignition[1167]: Ignition 2.21.0
Jul 7 00:14:57.396961 ignition[1167]: Stage: kargs
Jul 7 00:14:57.397373 ignition[1167]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:57.397386 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:14:57.397508 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:14:57.398503 ignition[1167]: PUT result: OK
Jul 7 00:14:57.405586 ignition[1167]: kargs: kargs passed
Jul 7 00:14:57.406381 ignition[1167]: Ignition finished successfully
Jul 7 00:14:57.407807 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:14:57.410077 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:14:57.439385 ignition[1174]: Ignition 2.21.0
Jul 7 00:14:57.439401 ignition[1174]: Stage: disks
Jul 7 00:14:57.439871 ignition[1174]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:57.439888 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:14:57.440009 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:14:57.441764 ignition[1174]: PUT result: OK
Jul 7 00:14:57.446050 ignition[1174]: disks: disks passed
Jul 7 00:14:57.446132 ignition[1174]: Ignition finished successfully
Jul 7 00:14:57.447759 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:14:57.448760 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 00:14:57.449497 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:14:57.449987 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:14:57.450526 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:14:57.451084 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:14:57.452771 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:14:57.507280 systemd-fsck[1183]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 7 00:14:57.510441 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:14:57.512368 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:14:57.657806 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 98c55dfc-aac4-4fdd-8ec0-1f5587b3aa36 r/w with ordered data mode. Quota mode: none.
Jul 7 00:14:57.658914 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:14:57.659855 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:14:57.661587 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:14:57.663888 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:14:57.665179 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 00:14:57.665238 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:14:57.665262 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:14:57.674473 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:14:57.676759 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:14:57.691832 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1202)
Jul 7 00:14:57.696765 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:14:57.696884 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:14:57.696907 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 00:14:57.707459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:14:58.061953 initrd-setup-root[1226]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 00:14:58.106727 initrd-setup-root[1233]: cut: /sysroot/etc/group: No such file or directory
Jul 7 00:14:58.160440 initrd-setup-root[1240]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 00:14:58.182987 initrd-setup-root[1247]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 00:14:58.527971 systemd-networkd[1151]: eth0: Gained IPv6LL
Jul 7 00:14:58.546132 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 00:14:58.552409 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 00:14:58.556973 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 00:14:58.574124 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 00:14:58.577156 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:14:58.606593 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 00:14:58.610905 ignition[1314]: INFO : Ignition 2.21.0
Jul 7 00:14:58.610905 ignition[1314]: INFO : Stage: mount
Jul 7 00:14:58.612530 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:58.612530 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:14:58.612530 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:14:58.614359 ignition[1314]: INFO : PUT result: OK
Jul 7 00:14:58.617829 ignition[1314]: INFO : mount: mount passed
Jul 7 00:14:58.618523 ignition[1314]: INFO : Ignition finished successfully
Jul 7 00:14:58.620117 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 00:14:58.621908 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 00:14:58.660859 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:14:58.702819 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1327)
Jul 7 00:14:58.706100 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:14:58.706170 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:14:58.708694 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 00:14:58.716080 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:14:58.746485 ignition[1343]: INFO : Ignition 2.21.0
Jul 7 00:14:58.746485 ignition[1343]: INFO : Stage: files
Jul 7 00:14:58.748131 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:58.748131 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:14:58.748131 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:14:58.749816 ignition[1343]: INFO : PUT result: OK
Jul 7 00:14:58.753444 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 00:14:58.754259 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 00:14:58.754259 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 00:14:58.769175 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 00:14:58.770144 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 00:14:58.770144 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 00:14:58.769593 unknown[1343]: wrote ssh authorized keys file for user: core
Jul 7 00:14:58.783287 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 00:14:58.784378 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 7 00:14:58.861960 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 00:14:59.100448 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 00:14:59.100448 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 00:14:59.102619 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 7 00:14:59.550416 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 00:14:59.967912 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 00:14:59.967912 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:14:59.980405 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 7 00:15:00.697822 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 00:15:01.659860 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:15:01.659860 ignition[1343]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 00:15:01.668721 ignition[1343]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:15:01.677002 ignition[1343]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:15:01.677002 ignition[1343]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 00:15:01.677002 ignition[1343]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 00:15:01.685779 ignition[1343]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 00:15:01.685779 ignition[1343]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:15:01.685779 ignition[1343]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:15:01.685779 ignition[1343]: INFO : files: files passed
Jul 7 00:15:01.685779 ignition[1343]: INFO : Ignition finished successfully
Jul 7 00:15:01.682271 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 00:15:01.690290 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 00:15:01.705824 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 00:15:01.743096 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 00:15:01.744428 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 00:15:01.792351 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:15:01.792351 initrd-setup-root-after-ignition[1374]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:15:01.813177 initrd-setup-root-after-ignition[1378]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:15:01.817677 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:15:01.823522 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 00:15:01.830425 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 00:15:01.986175 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 00:15:01.986358 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 00:15:01.991503 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 00:15:01.993444 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 00:15:01.996261 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:15:02.001353 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:15:02.069030 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:15:02.076908 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:15:02.133320 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:15:02.134317 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:15:02.136007 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:15:02.136904 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:15:02.137085 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:15:02.140419 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:15:02.141415 systemd[1]: Stopped target basic.target - Basic System. Jul 7 00:15:02.142182 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 00:15:02.143351 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:15:02.144258 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:15:02.145038 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:15:02.145962 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:15:02.146711 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:15:02.147840 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:15:02.149053 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:15:02.149884 systemd[1]: Stopped target swap.target - Swaps. 
Jul 7 00:15:02.150574 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:15:02.150771 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:15:02.152119 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:15:02.152957 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:15:02.153826 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 00:15:02.154015 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:15:02.154672 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:15:02.154945 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 00:15:02.156333 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:15:02.156548 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:15:02.157320 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 00:15:02.157534 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:15:02.161077 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:15:02.162923 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:15:02.163164 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:15:02.167128 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:15:02.168910 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:15:02.169197 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:15:02.170409 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:15:02.170631 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:15:02.179095 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 7 00:15:02.179231 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:15:02.208279 ignition[1398]: INFO : Ignition 2.21.0 Jul 7 00:15:02.208279 ignition[1398]: INFO : Stage: umount Jul 7 00:15:02.210552 ignition[1398]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:15:02.210552 ignition[1398]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 00:15:02.210552 ignition[1398]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 00:15:02.217596 ignition[1398]: INFO : PUT result: OK Jul 7 00:15:02.217596 ignition[1398]: INFO : umount: umount passed Jul 7 00:15:02.217596 ignition[1398]: INFO : Ignition finished successfully Jul 7 00:15:02.219552 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:15:02.219707 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:15:02.222179 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:15:02.223067 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:15:02.223151 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:15:02.225172 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:15:02.225260 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:15:02.225874 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 00:15:02.225953 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 00:15:02.226470 systemd[1]: Stopped target network.target - Network. Jul 7 00:15:02.227949 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:15:02.228159 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:15:02.228682 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:15:02.229536 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 7 00:15:02.229676 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:15:02.230387 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 00:15:02.231299 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 00:15:02.232955 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 00:15:02.233018 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:15:02.233706 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:15:02.233763 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:15:02.234389 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:15:02.234477 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:15:02.235354 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:15:02.235420 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:15:02.236167 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:15:02.236714 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:15:02.238010 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:15:02.238147 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:15:02.239517 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:15:02.241475 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:15:02.243750 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:15:02.243903 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:15:02.248406 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 00:15:02.248760 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:15:02.248984 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jul 7 00:15:02.251110 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 00:15:02.252556 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 00:15:02.253364 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:15:02.253428 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:15:02.255664 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:15:02.259594 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:15:02.259717 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:15:02.260304 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:15:02.260374 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:15:02.263204 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:15:02.263285 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:15:02.263964 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:15:02.264039 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:15:02.264863 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:15:02.271225 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:15:02.271351 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:15:02.285443 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:15:02.286724 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:15:02.289874 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:15:02.290036 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jul 7 00:15:02.292698 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 00:15:02.292818 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 00:15:02.293877 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:15:02.293942 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:15:02.294547 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:15:02.294627 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:15:02.295722 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:15:02.295938 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:15:02.296992 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:15:02.297071 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:15:02.299443 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:15:02.300849 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 00:15:02.300938 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:15:02.302573 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:15:02.302644 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:15:02.304956 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:15:02.305035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:15:02.308637 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 00:15:02.308735 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Jul 7 00:15:02.309021 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:15:02.327504 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:15:02.327686 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:15:02.329546 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:15:02.331410 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:15:02.360752 systemd[1]: Switching root. Jul 7 00:15:02.412134 systemd-journald[207]: Journal stopped Jul 7 00:15:04.689508 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Jul 7 00:15:04.689612 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:15:04.689637 kernel: SELinux: policy capability open_perms=1 Jul 7 00:15:04.689657 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:15:04.689676 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:15:04.689697 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:15:04.689722 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:15:04.689746 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:15:04.689765 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:15:04.696843 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 00:15:04.696890 kernel: audit: type=1403 audit(1751847303.022:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:15:04.696915 systemd[1]: Successfully loaded SELinux policy in 89.533ms. Jul 7 00:15:04.696956 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.370ms. 
Jul 7 00:15:04.696987 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:15:04.697016 systemd[1]: Detected virtualization amazon. Jul 7 00:15:04.697037 systemd[1]: Detected architecture x86-64. Jul 7 00:15:04.697057 systemd[1]: Detected first boot. Jul 7 00:15:04.697079 systemd[1]: Initializing machine ID from VM UUID. Jul 7 00:15:04.697101 zram_generator::config[1441]: No configuration found. Jul 7 00:15:04.697124 kernel: Guest personality initialized and is inactive Jul 7 00:15:04.697144 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 7 00:15:04.697164 kernel: Initialized host personality Jul 7 00:15:04.697184 kernel: NET: Registered PF_VSOCK protocol family Jul 7 00:15:04.697209 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:15:04.697231 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 00:15:04.697258 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:15:04.697281 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:15:04.697301 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:15:04.697323 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:15:04.697343 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:15:04.697363 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:15:04.697386 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:15:04.697407 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Jul 7 00:15:04.697426 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 00:15:04.697444 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:15:04.697463 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:15:04.697481 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:15:04.697500 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:15:04.697519 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:15:04.697539 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:15:04.697577 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:15:04.697599 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:15:04.697620 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 00:15:04.697640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:15:04.697661 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:15:04.697681 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:15:04.697701 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:15:04.697722 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:15:04.697744 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:15:04.697764 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:15:04.697810 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:15:04.697831 systemd[1]: Reached target slices.target - Slice Units. 
Jul 7 00:15:04.697851 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:15:04.697871 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:15:04.697890 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:15:04.697910 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 00:15:04.697930 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:15:04.697954 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:15:04.697972 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:15:04.697993 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:15:04.698013 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:15:04.698034 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:15:04.698053 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:15:04.698073 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:15:04.698093 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:15:04.698115 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:15:04.698137 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:15:04.698165 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 00:15:04.698184 systemd[1]: Reached target machines.target - Containers. Jul 7 00:15:04.698203 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 7 00:15:04.698219 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:15:04.698237 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:15:04.698255 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 00:15:04.698273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:15:04.698294 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:15:04.698312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:15:04.698333 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:15:04.698357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:15:04.698381 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:15:04.698402 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:15:04.698419 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:15:04.698437 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 00:15:04.698455 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 00:15:04.698480 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:15:04.698500 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:15:04.698519 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:15:04.698537 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 7 00:15:04.698555 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 00:15:04.698573 kernel: loop: module loaded Jul 7 00:15:04.698594 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 00:15:04.698620 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:15:04.698639 kernel: fuse: init (API version 7.41) Jul 7 00:15:04.698657 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:15:04.698681 systemd[1]: Stopped verity-setup.service. Jul 7 00:15:04.698704 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:15:04.698723 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:15:04.698743 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:15:04.698764 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:15:04.705769 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:15:04.705837 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:15:04.705859 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 00:15:04.705881 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:15:04.705909 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 00:15:04.705928 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 00:15:04.705949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:15:04.705969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:15:04.705990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:15:04.706013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 7 00:15:04.706033 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 00:15:04.706054 kernel: ACPI: bus type drm_connector registered Jul 7 00:15:04.706076 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:15:04.706095 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:15:04.706115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:15:04.706134 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:15:04.706155 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:15:04.706175 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:15:04.706195 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:15:04.706216 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:15:04.706237 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:15:04.706261 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:15:04.706331 systemd-journald[1520]: Collecting audit messages is disabled. Jul 7 00:15:04.706373 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 00:15:04.706392 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:15:04.706417 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:15:04.706437 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 00:15:04.706458 systemd-journald[1520]: Journal started Jul 7 00:15:04.706495 systemd-journald[1520]: Runtime Journal (/run/log/journal/ec28605d1aa97c7fa57971a3f6872eb8) is 4.8M, max 38.4M, 33.6M free. Jul 7 00:15:04.256478 systemd[1]: Queued start job for default target multi-user.target. 
Jul 7 00:15:04.269356 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 7 00:15:04.270096 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:15:04.714955 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 00:15:04.715024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:15:04.724559 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:15:04.724666 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:15:04.731813 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:15:04.736815 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:15:04.747807 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:15:04.758864 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 00:15:04.767698 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:15:04.771184 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:15:04.775257 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 00:15:04.776709 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:15:04.779429 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 00:15:04.819072 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 00:15:04.826978 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:15:04.831118 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jul 7 00:15:04.833076 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 00:15:04.842367 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 00:15:04.845851 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:15:04.864125 systemd-journald[1520]: Time spent on flushing to /var/log/journal/ec28605d1aa97c7fa57971a3f6872eb8 is 56.873ms for 1022 entries. Jul 7 00:15:04.864125 systemd-journald[1520]: System Journal (/var/log/journal/ec28605d1aa97c7fa57971a3f6872eb8) is 8M, max 195.6M, 187.6M free. Jul 7 00:15:04.945952 systemd-journald[1520]: Received client request to flush runtime journal. Jul 7 00:15:04.946040 kernel: loop0: detected capacity change from 0 to 146240 Jul 7 00:15:04.892485 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 00:15:04.927813 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:15:04.949951 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 00:15:04.954814 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:15:04.958963 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:15:05.016815 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:15:05.038748 systemd-tmpfiles[1590]: ACLs are not supported, ignoring. Jul 7 00:15:05.039223 systemd-tmpfiles[1590]: ACLs are not supported, ignoring. Jul 7 00:15:05.046895 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:15:05.053828 kernel: loop1: detected capacity change from 0 to 113872 Jul 7 00:15:05.168871 kernel: loop2: detected capacity change from 0 to 72352 Jul 7 00:15:05.240820 kernel: loop3: detected capacity change from 0 to 224512 Jul 7 00:15:05.275310 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jul 7 00:15:05.300825 kernel: loop4: detected capacity change from 0 to 146240 Jul 7 00:15:05.336352 kernel: loop5: detected capacity change from 0 to 113872 Jul 7 00:15:05.352818 kernel: loop6: detected capacity change from 0 to 72352 Jul 7 00:15:05.377458 kernel: loop7: detected capacity change from 0 to 224512 Jul 7 00:15:05.409304 (sd-merge)[1598]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 7 00:15:05.410747 (sd-merge)[1598]: Merged extensions into '/usr'. Jul 7 00:15:05.417472 systemd[1]: Reload requested from client PID 1556 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:15:05.417823 systemd[1]: Reloading... Jul 7 00:15:05.562834 zram_generator::config[1624]: No configuration found. Jul 7 00:15:05.731490 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:15:05.906370 systemd[1]: Reloading finished in 487 ms. Jul 7 00:15:05.929604 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:15:05.936988 systemd[1]: Starting ensure-sysext.service... Jul 7 00:15:05.940762 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:15:05.966894 systemd[1]: Reload requested from client PID 1675 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:15:05.966914 systemd[1]: Reloading... Jul 7 00:15:05.995462 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 00:15:05.995504 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:15:05.995908 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 7 00:15:05.996290 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:15:05.999658 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:15:06.000341 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. Jul 7 00:15:06.000830 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. Jul 7 00:15:06.014413 systemd-tmpfiles[1676]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:15:06.014434 systemd-tmpfiles[1676]: Skipping /boot Jul 7 00:15:06.038697 systemd-tmpfiles[1676]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:15:06.038714 systemd-tmpfiles[1676]: Skipping /boot Jul 7 00:15:06.140825 zram_generator::config[1704]: No configuration found. Jul 7 00:15:06.324466 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:15:06.421053 ldconfig[1552]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:15:06.426720 systemd[1]: Reloading finished in 457 ms. Jul 7 00:15:06.443534 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:15:06.444495 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:15:06.459041 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:15:06.471314 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:15:06.475127 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:15:06.480408 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jul 7 00:15:06.499014 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:15:06.509239 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:15:06.514170 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:15:06.518706 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:15:06.519033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:15:06.522196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:15:06.528220 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:15:06.533938 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:15:06.534702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:15:06.534914 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:15:06.535065 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:15:06.540663 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:15:06.540970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:15:06.541213 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 7 00:15:06.541351 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:15:06.541497 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:15:06.550852 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:15:06.560186 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:15:06.560616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:15:06.566503 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:15:06.568364 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:15:06.568580 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:15:06.568890 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:15:06.570297 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:15:06.581931 systemd[1]: Finished ensure-sysext.service. Jul 7 00:15:06.609712 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:15:06.621452 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:15:06.630725 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jul 7 00:15:06.645161 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:15:06.646999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:15:06.657519 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:15:06.657848 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:15:06.659039 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:15:06.662473 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:15:06.664194 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:15:06.665595 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:15:06.666531 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:15:06.669170 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:15:06.692042 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:15:06.700366 systemd-udevd[1763]: Using default interface naming scheme 'v255'. Jul 7 00:15:06.715804 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:15:06.717905 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:15:06.720033 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:15:06.739718 augenrules[1803]: No rules Jul 7 00:15:06.743326 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:15:06.744174 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jul 7 00:15:06.773233 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:15:06.781988 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:15:06.912327 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 00:15:06.915541 (udev-worker)[1826]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:15:06.951518 systemd-resolved[1762]: Positive Trust Anchors: Jul 7 00:15:06.951535 systemd-resolved[1762]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:15:06.951596 systemd-resolved[1762]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:15:06.960586 systemd-resolved[1762]: Defaulting to hostname 'linux'. Jul 7 00:15:06.964864 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:15:06.965629 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:15:06.967031 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:15:06.967767 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:15:06.968379 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:15:06.968934 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
Jul 7 00:15:06.969728 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:15:06.972611 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:15:06.973232 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:15:06.973827 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:15:06.973873 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:15:06.974386 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:15:06.977161 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:15:06.980235 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:15:06.985683 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:15:06.987716 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 00:15:06.988388 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 00:15:06.994708 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:15:06.996526 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:15:06.999317 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:15:07.001980 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:15:07.003328 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:15:07.004342 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:15:07.004390 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:15:07.009074 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jul 7 00:15:07.013067 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:15:07.019084 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:15:07.024038 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:15:07.028058 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:15:07.028716 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:15:07.035115 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 00:15:07.041494 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:15:07.048167 systemd[1]: Started ntpd.service - Network Time Service. Jul 7 00:15:07.065435 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:15:07.077050 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 7 00:15:07.108147 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:15:07.113090 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:15:07.132144 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:15:07.138431 jq[1847]: false Jul 7 00:15:07.149811 extend-filesystems[1848]: Found /dev/nvme0n1p6 Jul 7 00:15:07.158416 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 7 00:15:07.175652 oslogin_cache_refresh[1849]: Refreshing passwd entry cache Jul 7 00:15:07.208855 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Refreshing passwd entry cache Jul 7 00:15:07.209185 extend-filesystems[1848]: Found /dev/nvme0n1p9 Jul 7 00:15:07.209185 extend-filesystems[1848]: Checking size of /dev/nvme0n1p9 Jul 7 00:15:07.159553 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:15:07.169126 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:15:07.182972 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:15:07.186871 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:15:07.228053 update_engine[1864]: I20250707 00:15:07.209892 1864 main.cc:92] Flatcar Update Engine starting Jul 7 00:15:07.188430 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:15:07.189581 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:15:07.228705 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:15:07.229882 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:15:07.233856 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Failure getting users, quitting Jul 7 00:15:07.233856 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:15:07.233856 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Refreshing group entry cache Jul 7 00:15:07.232026 oslogin_cache_refresh[1849]: Failure getting users, quitting Jul 7 00:15:07.232052 oslogin_cache_refresh[1849]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jul 7 00:15:07.232118 oslogin_cache_refresh[1849]: Refreshing group entry cache Jul 7 00:15:07.243773 jq[1866]: true Jul 7 00:15:07.245509 systemd-networkd[1813]: lo: Link UP Jul 7 00:15:07.245800 systemd-networkd[1813]: lo: Gained carrier Jul 7 00:15:07.247219 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Failure getting groups, quitting Jul 7 00:15:07.247219 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:15:07.246995 oslogin_cache_refresh[1849]: Failure getting groups, quitting Jul 7 00:15:07.247015 oslogin_cache_refresh[1849]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:15:07.251458 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 00:15:07.251798 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 00:15:07.253380 systemd-networkd[1813]: Enumeration completed Jul 7 00:15:07.254266 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:15:07.258205 systemd-networkd[1813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:15:07.258217 systemd-networkd[1813]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:15:07.259926 systemd[1]: Reached target network.target - Network. Jul 7 00:15:07.265904 systemd-networkd[1813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:15:07.266140 systemd-networkd[1813]: eth0: Link UP Jul 7 00:15:07.267043 systemd-networkd[1813]: eth0: Gained carrier Jul 7 00:15:07.268871 systemd-networkd[1813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:15:07.271771 systemd[1]: Starting containerd.service - containerd container runtime... 
Jul 7 00:15:07.276969 extend-filesystems[1848]: Resized partition /dev/nvme0n1p9 Jul 7 00:15:07.278488 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:15:07.283996 systemd-networkd[1813]: eth0: DHCPv4 address 172.31.30.121/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 7 00:15:07.287107 extend-filesystems[1892]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 00:15:07.292873 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:15:07.313811 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 7 00:15:07.351598 jq[1881]: true Jul 7 00:15:07.387816 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 7 00:15:07.388645 dbus-daemon[1845]: [system] SELinux support is enabled Jul 7 00:15:07.390238 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:15:07.398676 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:15:07.398722 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:15:07.400568 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:15:07.400612 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:15:07.410443 extend-filesystems[1892]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 7 00:15:07.410443 extend-filesystems[1892]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 00:15:07.410443 extend-filesystems[1892]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Jul 7 00:15:07.414622 extend-filesystems[1848]: Resized filesystem in /dev/nvme0n1p9 Jul 7 00:15:07.412694 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:15:07.416132 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:15:07.437849 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 00:15:07.441217 dbus-daemon[1845]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1813 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 00:15:07.444014 update_engine[1864]: I20250707 00:15:07.443920 1864 update_check_scheduler.cc:74] Next update check in 2m8s Jul 7 00:15:07.453932 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 00:15:07.458003 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:15:07.458318 (ntainerd)[1908]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:15:07.461866 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 7 00:15:07.463072 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 00:15:07.470474 kernel: ACPI: button: Power Button [PWRF] Jul 7 00:15:07.472052 tar[1876]: linux-amd64/LICENSE Jul 7 00:15:07.472052 tar[1876]: linux-amd64/helm Jul 7 00:15:07.475009 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:15:07.478581 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:15:07.478931 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 7 00:15:07.512809 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jul 7 00:15:07.545175 coreos-metadata[1844]: Jul 07 00:15:07.543 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 00:15:07.545593 bash[1932]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:15:07.549924 coreos-metadata[1844]: Jul 07 00:15:07.549 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 7 00:15:07.548196 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:15:07.563560 coreos-metadata[1844]: Jul 07 00:15:07.557 INFO Fetch successful Jul 7 00:15:07.559195 systemd[1]: Starting sshkeys.service... Jul 7 00:15:07.565687 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:15:07.565776 coreos-metadata[1844]: Jul 07 00:15:07.564 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 7 00:15:07.573961 kernel: ACPI: button: Sleep Button [SLPF] Jul 7 00:15:07.575647 coreos-metadata[1844]: Jul 07 00:15:07.575 INFO Fetch successful Jul 7 00:15:07.575647 coreos-metadata[1844]: Jul 07 00:15:07.575 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 7 00:15:07.576405 coreos-metadata[1844]: Jul 07 00:15:07.576 INFO Fetch successful Jul 7 00:15:07.576405 coreos-metadata[1844]: Jul 07 00:15:07.576 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 7 00:15:07.597467 coreos-metadata[1844]: Jul 07 00:15:07.597 INFO Fetch successful Jul 7 00:15:07.597467 coreos-metadata[1844]: Jul 07 00:15:07.597 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 7 00:15:07.600022 coreos-metadata[1844]: Jul 07 00:15:07.599 INFO Fetch failed with 404: resource not found Jul 7 00:15:07.600147 coreos-metadata[1844]: Jul 07 00:15:07.600 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 7 
00:15:07.605114 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 00:15:07.610880 coreos-metadata[1844]: Jul 07 00:15:07.608 INFO Fetch successful Jul 7 00:15:07.610880 coreos-metadata[1844]: Jul 07 00:15:07.608 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 7 00:15:07.610100 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 00:15:07.613292 coreos-metadata[1844]: Jul 07 00:15:07.613 INFO Fetch successful Jul 7 00:15:07.613292 coreos-metadata[1844]: Jul 07 00:15:07.613 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 7 00:15:07.621662 coreos-metadata[1844]: Jul 07 00:15:07.618 INFO Fetch successful Jul 7 00:15:07.621662 coreos-metadata[1844]: Jul 07 00:15:07.618 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 7 00:15:07.621662 coreos-metadata[1844]: Jul 07 00:15:07.620 INFO Fetch successful Jul 7 00:15:07.621662 coreos-metadata[1844]: Jul 07 00:15:07.620 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 7 00:15:07.622812 coreos-metadata[1844]: Jul 07 00:15:07.622 INFO Fetch successful Jul 7 00:15:07.701718 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 00:15:07.703952 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 7 00:15:07.874591 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 7 00:15:07.899062 coreos-metadata[1935]: Jul 07 00:15:07.898 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 00:15:07.899513 coreos-metadata[1935]: Jul 07 00:15:07.899 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 7 00:15:07.901917 coreos-metadata[1935]: Jul 07 00:15:07.901 INFO Fetch successful Jul 7 00:15:07.901917 coreos-metadata[1935]: Jul 07 00:15:07.901 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 00:15:07.904236 coreos-metadata[1935]: Jul 07 00:15:07.902 INFO Fetch successful Jul 7 00:15:07.912130 unknown[1935]: wrote ssh authorized keys file for user: core Jul 7 00:15:07.965924 locksmithd[1928]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:15:07.983494 update-ssh-keys[1959]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:15:07.984221 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 00:15:07.996477 systemd[1]: Finished sshkeys.service. 
Jul 7 00:15:08.115473 sshd_keygen[1894]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:15:08.130246 containerd[1908]: time="2025-07-07T00:15:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:15:08.139203 containerd[1908]: time="2025-07-07T00:15:08.139142466Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:15:08.172376 ntpd[1851]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:42 UTC 2025 (1): Starting Jul 7 00:15:08.175230 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:42 UTC 2025 (1): Starting Jul 7 00:15:08.175230 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 00:15:08.175230 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: ---------------------------------------------------- Jul 7 00:15:08.175230 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: ntp-4 is maintained by Network Time Foundation, Jul 7 00:15:08.175230 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 00:15:08.175230 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: corporation. Support and training for ntp-4 are Jul 7 00:15:08.175230 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: available at https://www.nwtime.org/support Jul 7 00:15:08.175230 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: ---------------------------------------------------- Jul 7 00:15:08.172415 ntpd[1851]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 00:15:08.172426 ntpd[1851]: ---------------------------------------------------- Jul 7 00:15:08.172436 ntpd[1851]: ntp-4 is maintained by Network Time Foundation, Jul 7 00:15:08.172446 ntpd[1851]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 00:15:08.172456 ntpd[1851]: corporation. 
Support and training for ntp-4 are Jul 7 00:15:08.172465 ntpd[1851]: available at https://www.nwtime.org/support Jul 7 00:15:08.172475 ntpd[1851]: ---------------------------------------------------- Jul 7 00:15:08.180554 ntpd[1851]: proto: precision = 0.064 usec (-24) Jul 7 00:15:08.180921 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: proto: precision = 0.064 usec (-24) Jul 7 00:15:08.181313 ntpd[1851]: basedate set to 2025-06-24 Jul 7 00:15:08.185128 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: basedate set to 2025-06-24 Jul 7 00:15:08.185128 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: gps base set to 2025-06-29 (week 2373) Jul 7 00:15:08.183557 ntpd[1851]: gps base set to 2025-06-29 (week 2373) Jul 7 00:15:08.187458 ntpd[1851]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 00:15:08.188597 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 00:15:08.188597 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 00:15:08.188597 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 00:15:08.188597 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: Listen normally on 3 eth0 172.31.30.121:123 Jul 7 00:15:08.187522 ntpd[1851]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 00:15:08.187723 ntpd[1851]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 00:15:08.187759 ntpd[1851]: Listen normally on 3 eth0 172.31.30.121:123 Jul 7 00:15:08.191508 ntpd[1851]: Listen normally on 4 lo [::1]:123 Jul 7 00:15:08.192013 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: Listen normally on 4 lo [::1]:123 Jul 7 00:15:08.192013 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: bind(21) AF_INET6 fe80::44c:58ff:fe3a:d8b7%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:15:08.192013 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: unable to create socket on eth0 (5) for fe80::44c:58ff:fe3a:d8b7%2#123 Jul 7 00:15:08.192013 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: failed to init interface for address fe80::44c:58ff:fe3a:d8b7%2 Jul 7 
00:15:08.192013 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: Listening on routing socket on fd #21 for interface updates Jul 7 00:15:08.191599 ntpd[1851]: bind(21) AF_INET6 fe80::44c:58ff:fe3a:d8b7%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:15:08.191621 ntpd[1851]: unable to create socket on eth0 (5) for fe80::44c:58ff:fe3a:d8b7%2#123 Jul 7 00:15:08.191637 ntpd[1851]: failed to init interface for address fe80::44c:58ff:fe3a:d8b7%2 Jul 7 00:15:08.191681 ntpd[1851]: Listening on routing socket on fd #21 for interface updates Jul 7 00:15:08.198263 containerd[1908]: time="2025-07-07T00:15:08.196998481Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.476µs" Jul 7 00:15:08.198263 containerd[1908]: time="2025-07-07T00:15:08.197051629Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:15:08.198263 containerd[1908]: time="2025-07-07T00:15:08.197079984Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 00:15:08.198263 containerd[1908]: time="2025-07-07T00:15:08.197277900Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:15:08.198263 containerd[1908]: time="2025-07-07T00:15:08.197299289Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:15:08.198263 containerd[1908]: time="2025-07-07T00:15:08.197330323Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:15:08.198263 containerd[1908]: time="2025-07-07T00:15:08.197398578Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:15:08.198263 containerd[1908]: time="2025-07-07T00:15:08.197413566Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:15:08.197802 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:15:08.202386 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:15:08.202542 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:15:08.204037 ntpd[1851]: 7 Jul 00:15:08 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:15:08.214848 containerd[1908]: time="2025-07-07T00:15:08.214758477Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:15:08.215005 containerd[1908]: time="2025-07-07T00:15:08.214849789Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:15:08.215005 containerd[1908]: time="2025-07-07T00:15:08.214881238Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:15:08.215005 containerd[1908]: time="2025-07-07T00:15:08.214894220Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:15:08.215186 containerd[1908]: time="2025-07-07T00:15:08.215049687Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:15:08.215361 containerd[1908]: time="2025-07-07T00:15:08.215334417Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:15:08.215446 containerd[1908]: time="2025-07-07T00:15:08.215393174Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or 
directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:15:08.215446 containerd[1908]: time="2025-07-07T00:15:08.215411005Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:15:08.215827 containerd[1908]: time="2025-07-07T00:15:08.215455698Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 00:15:08.218005 containerd[1908]: time="2025-07-07T00:15:08.217899002Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:15:08.218487 containerd[1908]: time="2025-07-07T00:15:08.218046301Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:15:08.225812 containerd[1908]: time="2025-07-07T00:15:08.225452878Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:15:08.225812 containerd[1908]: time="2025-07-07T00:15:08.225557473Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:15:08.225812 containerd[1908]: time="2025-07-07T00:15:08.225581828Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:15:08.225812 containerd[1908]: time="2025-07-07T00:15:08.225599348Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:15:08.225812 containerd[1908]: time="2025-07-07T00:15:08.225621097Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:15:08.225812 containerd[1908]: time="2025-07-07T00:15:08.225635186Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 00:15:08.225812 containerd[1908]: time="2025-07-07T00:15:08.225661171Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service 
type=io.containerd.service.v1 Jul 7 00:15:08.225812 containerd[1908]: time="2025-07-07T00:15:08.225682507Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:15:08.225812 containerd[1908]: time="2025-07-07T00:15:08.225699709Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:15:08.225812 containerd[1908]: time="2025-07-07T00:15:08.225715917Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 00:15:08.225812 containerd[1908]: time="2025-07-07T00:15:08.225730561Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:15:08.225812 containerd[1908]: time="2025-07-07T00:15:08.225750077Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:15:08.226280 containerd[1908]: time="2025-07-07T00:15:08.225945975Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:15:08.226280 containerd[1908]: time="2025-07-07T00:15:08.225978397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:15:08.226280 containerd[1908]: time="2025-07-07T00:15:08.226004644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:15:08.226280 containerd[1908]: time="2025-07-07T00:15:08.226023152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 00:15:08.226280 containerd[1908]: time="2025-07-07T00:15:08.226039879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 00:15:08.226280 containerd[1908]: time="2025-07-07T00:15:08.226057378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:15:08.226280 
containerd[1908]: time="2025-07-07T00:15:08.226073927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:15:08.226280 containerd[1908]: time="2025-07-07T00:15:08.226088802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 00:15:08.226280 containerd[1908]: time="2025-07-07T00:15:08.226107968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 00:15:08.226280 containerd[1908]: time="2025-07-07T00:15:08.226127673Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:15:08.226280 containerd[1908]: time="2025-07-07T00:15:08.226144062Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:15:08.226280 containerd[1908]: time="2025-07-07T00:15:08.226234409Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:15:08.226280 containerd[1908]: time="2025-07-07T00:15:08.226253927Z" level=info msg="Start snapshots syncer" Jul 7 00:15:08.226691 containerd[1908]: time="2025-07-07T00:15:08.226284870Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:15:08.226727 containerd[1908]: time="2025-07-07T00:15:08.226670895Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 00:15:08.226907 containerd[1908]: time="2025-07-07T00:15:08.226740795Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:15:08.230106 containerd[1908]: time="2025-07-07T00:15:08.230036890Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:15:08.231095 containerd[1908]: time="2025-07-07T00:15:08.230280254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:15:08.231095 containerd[1908]: time="2025-07-07T00:15:08.230321730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:15:08.231095 containerd[1908]: time="2025-07-07T00:15:08.230340097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:15:08.231095 containerd[1908]: time="2025-07-07T00:15:08.230356984Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:15:08.231095 containerd[1908]: time="2025-07-07T00:15:08.230375002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:15:08.231095 containerd[1908]: time="2025-07-07T00:15:08.230390585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:15:08.231095 containerd[1908]: time="2025-07-07T00:15:08.230406113Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:15:08.231095 containerd[1908]: time="2025-07-07T00:15:08.230443471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:15:08.231095 containerd[1908]: time="2025-07-07T00:15:08.230457764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:15:08.231095 containerd[1908]: time="2025-07-07T00:15:08.230474181Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:15:08.231872 containerd[1908]: time="2025-07-07T00:15:08.231836014Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:15:08.231969 containerd[1908]: time="2025-07-07T00:15:08.231947649Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:15:08.232015 containerd[1908]: time="2025-07-07T00:15:08.231971405Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:15:08.232015 containerd[1908]: time="2025-07-07T00:15:08.231988643Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:15:08.232015 containerd[1908]: time="2025-07-07T00:15:08.232002408Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 00:15:08.232115 containerd[1908]: time="2025-07-07T00:15:08.232018980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:15:08.232115 containerd[1908]: time="2025-07-07T00:15:08.232050065Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:15:08.232115 containerd[1908]: time="2025-07-07T00:15:08.232076056Z" level=info msg="runtime interface created" Jul 7 00:15:08.232115 containerd[1908]: time="2025-07-07T00:15:08.232083698Z" level=info msg="created NRI interface" Jul 7 00:15:08.232115 containerd[1908]: time="2025-07-07T00:15:08.232097368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:15:08.232269 containerd[1908]: time="2025-07-07T00:15:08.232119419Z" level=info msg="Connect containerd service" Jul 7 00:15:08.232269 containerd[1908]: time="2025-07-07T00:15:08.232185378Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:15:08.239583 containerd[1908]: 
time="2025-07-07T00:15:08.239374707Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:15:08.250022 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:15:08.254887 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:15:08.316460 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:15:08.317572 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:15:08.324966 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:15:08.336098 systemd[1]: Started sshd@0-172.31.30.121:22-147.75.109.163:58682.service - OpenSSH per-connection server daemon (147.75.109.163:58682). Jul 7 00:15:08.343445 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:15:08.443374 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:15:08.448011 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:15:08.452534 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:15:08.454045 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:15:08.576142 systemd-networkd[1813]: eth0: Gained IPv6LL Jul 7 00:15:08.580855 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:15:08.583258 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:15:08.592706 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 7 00:15:08.598030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:15:08.601642 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:15:08.643917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
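The `starting cri plugin` entry earlier in this boot embeds the entire CRI runtime configuration as one escaped JSON blob inside `config="..."`. A quick way to inspect it is to unescape and parse it; the sketch below inlines a trimmed, hand-unescaped copy of a few fields from that blob (rather than reading the journal directly) and pulls out the default runtime and cgroup driver:

```python
import json

# Trimmed copy of the config="..." payload from the "starting cri plugin"
# entry above, with the backslash escapes removed. Only a few of the many
# fields are reproduced here.
cri_config = json.loads(r'''
{
  "containerd": {
    "defaultRuntimeName": "runc",
    "runtimes": {
      "runc": {
        "runtimeType": "io.containerd.runc.v2",
        "options": {"SystemdCgroup": true}
      }
    }
  },
  "cni": {"binDir": "/opt/cni/bin", "confDir": "/etc/cni/net.d", "maxConfNum": 1},
  "enableSelinux": true
}
''')

default_rt = cri_config["containerd"]["defaultRuntimeName"]
runtime_opts = cri_config["containerd"]["runtimes"][default_rt]["options"]
print(default_rt, runtime_opts["SystemdCgroup"], cri_config["cni"]["confDir"])
# -> runc True /etc/cni/net.d
```

Note that the `confDir` value here is the same `/etc/cni/net.d` named in the `failed to load cni during init` error above: at this point in boot no CNI network config has been written there yet, which is why the CRI plugin defers network setup.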
Jul 7 00:15:08.756814 containerd[1908]: time="2025-07-07T00:15:08.751943080Z" level=info msg="Start subscribing containerd event" Jul 7 00:15:08.756814 containerd[1908]: time="2025-07-07T00:15:08.756547938Z" level=info msg="Start recovering state" Jul 7 00:15:08.762856 containerd[1908]: time="2025-07-07T00:15:08.759013478Z" level=info msg="Start event monitor" Jul 7 00:15:08.762856 containerd[1908]: time="2025-07-07T00:15:08.761422998Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:15:08.762856 containerd[1908]: time="2025-07-07T00:15:08.761453964Z" level=info msg="Start streaming server" Jul 7 00:15:08.762856 containerd[1908]: time="2025-07-07T00:15:08.761654298Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 00:15:08.762856 containerd[1908]: time="2025-07-07T00:15:08.761671317Z" level=info msg="runtime interface starting up..." Jul 7 00:15:08.762856 containerd[1908]: time="2025-07-07T00:15:08.761682644Z" level=info msg="starting plugins..." Jul 7 00:15:08.762856 containerd[1908]: time="2025-07-07T00:15:08.761708057Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:15:08.762856 containerd[1908]: time="2025-07-07T00:15:08.761631653Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:15:08.762856 containerd[1908]: time="2025-07-07T00:15:08.762316000Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:15:08.763816 containerd[1908]: time="2025-07-07T00:15:08.763403200Z" level=info msg="containerd successfully booted in 0.633815s" Jul 7 00:15:08.769570 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 7 00:15:08.770444 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:15:08.776129 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jul 7 00:15:08.790057 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:15:08.853327 amazon-ssm-agent[2077]: Initializing new seelog logger Jul 7 00:15:08.854845 amazon-ssm-agent[2077]: New Seelog Logger Creation Complete Jul 7 00:15:08.854941 amazon-ssm-agent[2077]: 2025/07/07 00:15:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:15:08.854941 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:15:08.855643 amazon-ssm-agent[2077]: 2025/07/07 00:15:08 processing appconfig overrides Jul 7 00:15:08.855733 amazon-ssm-agent[2077]: 2025/07/07 00:15:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:15:08.855733 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:15:08.856886 amazon-ssm-agent[2077]: 2025/07/07 00:15:08 processing appconfig overrides Jul 7 00:15:08.860299 amazon-ssm-agent[2077]: 2025/07/07 00:15:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:15:08.860299 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:15:08.860299 amazon-ssm-agent[2077]: 2025/07/07 00:15:08 processing appconfig overrides Jul 7 00:15:08.860299 amazon-ssm-agent[2077]: 2025-07-07 00:15:08.8556 INFO Proxy environment variables: Jul 7 00:15:08.861104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:15:08.861943 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:15:08.864377 sshd[2040]: Accepted publickey for core from 147.75.109.163 port 58682 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:15:08.865845 amazon-ssm-agent[2077]: 2025/07/07 00:15:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:15:08.865845 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 7 00:15:08.865845 amazon-ssm-agent[2077]: 2025/07/07 00:15:08 processing appconfig overrides Jul 7 00:15:08.869081 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:15:08.872844 sshd-session[2040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:08.881109 dbus-daemon[1845]: [system] Activating via systemd: service name='org.freedesktop.login1' unit='dbus-org.freedesktop.login1.service' requested by ':1.6' (uid=0 pid=2040 comm="sshd-session: core [priv] " label="system_u:system_r:kernel_t:s0") Jul 7 00:15:08.888326 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:15:08.924416 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 00:15:08.930733 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:15:08.932940 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:15:08.962059 amazon-ssm-agent[2077]: 2025-07-07 00:15:08.8556 INFO https_proxy: Jul 7 00:15:09.069223 amazon-ssm-agent[2077]: 2025-07-07 00:15:08.8556 INFO http_proxy: Jul 7 00:15:09.105618 systemd-logind[1861]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 00:15:09.106125 systemd-logind[1861]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 7 00:15:09.106154 systemd-logind[1861]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:15:09.106974 systemd-logind[1861]: New seat seat0. Jul 7 00:15:09.118345 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:15:09.120170 dbus-daemon[1845]: [system] Successfully activated service 'org.freedesktop.login1' Jul 7 00:15:09.131211 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:15:09.136273 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 7 00:15:09.167826 amazon-ssm-agent[2077]: 2025-07-07 00:15:08.8556 INFO no_proxy: Jul 7 00:15:09.179749 systemd-logind[1861]: New session 1 of user core. Jul 7 00:15:09.190163 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 00:15:09.194725 dbus-daemon[1845]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 00:15:09.198374 dbus-daemon[1845]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.9' (uid=0 pid=1924 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 00:15:09.212649 systemd[1]: Starting polkit.service - Authorization Manager... Jul 7 00:15:09.217479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:15:09.219978 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:15:09.227198 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:15:09.264580 (systemd)[2132]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:15:09.265307 amazon-ssm-agent[2077]: 2025-07-07 00:15:08.8568 INFO Checking if agent identity type OnPrem can be assumed Jul 7 00:15:09.276687 systemd-logind[1861]: New session c1 of user core. Jul 7 00:15:09.364394 amazon-ssm-agent[2077]: 2025-07-07 00:15:08.8577 INFO Checking if agent identity type EC2 can be assumed Jul 7 00:15:09.465503 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.0273 INFO Agent will take identity from EC2 Jul 7 00:15:09.574328 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.0323 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 7 00:15:09.588198 systemd[2132]: Queued start job for default target default.target. Jul 7 00:15:09.594227 systemd[2132]: Created slice app.slice - User Application Slice. Jul 7 00:15:09.594276 systemd[2132]: Reached target paths.target - Paths. 
Jul 7 00:15:09.594438 systemd[2132]: Reached target timers.target - Timers. Jul 7 00:15:09.600034 systemd[2132]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:15:09.626983 systemd[2132]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:15:09.629333 systemd[2132]: Reached target sockets.target - Sockets. Jul 7 00:15:09.629431 systemd[2132]: Reached target basic.target - Basic System. Jul 7 00:15:09.629492 systemd[2132]: Reached target default.target - Main User Target. Jul 7 00:15:09.629550 systemd[2132]: Startup finished in 329ms. Jul 7 00:15:09.629765 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:15:09.637992 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:15:09.671880 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.0323 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 7 00:15:09.696951 tar[1876]: linux-amd64/README.md Jul 7 00:15:09.706657 amazon-ssm-agent[2077]: 2025/07/07 00:15:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:15:09.706657 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:15:09.708809 amazon-ssm-agent[2077]: 2025/07/07 00:15:09 processing appconfig overrides Jul 7 00:15:09.747374 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:15:09.751601 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.0323 INFO [amazon-ssm-agent] Starting Core Agent Jul 7 00:15:09.751601 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.0323 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jul 7 00:15:09.751762 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.0324 INFO [Registrar] Starting registrar module Jul 7 00:15:09.751762 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.0387 INFO [EC2Identity] Checking disk for registration info Jul 7 00:15:09.751762 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.0387 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 7 00:15:09.751762 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.0388 INFO [EC2Identity] Generating registration keypair Jul 7 00:15:09.751762 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.6580 INFO [EC2Identity] Checking write access before registering Jul 7 00:15:09.751762 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.6585 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 7 00:15:09.751762 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.7062 INFO [EC2Identity] EC2 registration was successful. Jul 7 00:15:09.751762 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.7063 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 7 00:15:09.751762 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.7064 INFO [CredentialRefresher] credentialRefresher has started Jul 7 00:15:09.751762 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.7064 INFO [CredentialRefresher] Starting credentials refresher loop Jul 7 00:15:09.751762 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.7513 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 7 00:15:09.751762 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.7515 INFO [CredentialRefresher] Credentials ready Jul 7 00:15:09.771677 amazon-ssm-agent[2077]: 2025-07-07 00:15:09.7517 INFO [CredentialRefresher] Next credential rotation will be in 29.999993484816667 minutes Jul 7 00:15:09.810538 systemd[1]: Started sshd@1-172.31.30.121:22-147.75.109.163:58692.service - OpenSSH per-connection server daemon (147.75.109.163:58692). 
Jul 7 00:15:09.811076 polkitd[2129]: Started polkitd version 126 Jul 7 00:15:09.823619 polkitd[2129]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 00:15:09.826627 polkitd[2129]: Loading rules from directory /run/polkit-1/rules.d Jul 7 00:15:09.828114 polkitd[2129]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 00:15:09.828571 polkitd[2129]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 7 00:15:09.828605 polkitd[2129]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 00:15:09.828655 polkitd[2129]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 00:15:09.850045 polkitd[2129]: Finished loading, compiling and executing 2 rules Jul 7 00:15:09.851961 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 00:15:09.854328 dbus-daemon[1845]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 00:15:09.856186 polkitd[2129]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 00:15:09.896721 systemd-resolved[1762]: System hostname changed to 'ip-172-31-30-121'. Jul 7 00:15:09.897179 systemd-hostnamed[1924]: Hostname set to (transient) Jul 7 00:15:10.042045 sshd[2150]: Accepted publickey for core from 147.75.109.163 port 58692 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:15:10.043646 sshd-session[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:10.049461 systemd-logind[1861]: New session 2 of user core. Jul 7 00:15:10.054981 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:15:10.178839 sshd[2160]: Connection closed by 147.75.109.163 port 58692 Jul 7 00:15:10.180723 sshd-session[2150]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:10.184760 systemd-logind[1861]: Session 2 logged out. 
Waiting for processes to exit. Jul 7 00:15:10.186007 systemd[1]: sshd@1-172.31.30.121:22-147.75.109.163:58692.service: Deactivated successfully. Jul 7 00:15:10.187927 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 00:15:10.192151 systemd-logind[1861]: Removed session 2. Jul 7 00:15:10.213180 systemd[1]: Started sshd@2-172.31.30.121:22-147.75.109.163:58706.service - OpenSSH per-connection server daemon (147.75.109.163:58706). Jul 7 00:15:10.397219 sshd[2166]: Accepted publickey for core from 147.75.109.163 port 58706 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:15:10.399172 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:10.410111 systemd-logind[1861]: New session 3 of user core. Jul 7 00:15:10.414013 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:15:10.536775 sshd[2168]: Connection closed by 147.75.109.163 port 58706 Jul 7 00:15:10.539470 sshd-session[2166]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:10.544413 systemd[1]: sshd@2-172.31.30.121:22-147.75.109.163:58706.service: Deactivated successfully. Jul 7 00:15:10.547588 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 00:15:10.550051 systemd-logind[1861]: Session 3 logged out. Waiting for processes to exit. Jul 7 00:15:10.552830 systemd-logind[1861]: Removed session 3. Jul 7 00:15:10.764995 amazon-ssm-agent[2077]: 2025-07-07 00:15:10.7649 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 7 00:15:10.843988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:15:10.845429 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:15:10.847012 systemd[1]: Startup finished in 2.776s (kernel) + 9.310s (initrd) + 7.911s (userspace) = 19.998s. 
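The `Startup finished` line above reports the three boot phases and a total, but the printed components sum to 19.997 s while the total reads 19.998 s. A small check makes the discrepancy visible; the most likely explanation (an assumption, not stated in the log) is that systemd rounds each phase and the total independently from its microsecond counters, so the rounded parts need not sum exactly to the rounded total:

```python
# Phase durations exactly as printed in the "Startup finished" line above.
kernel, initrd, userspace = 2.776, 9.310, 7.911
total_printed = 19.998

total = kernel + initrd + userspace
print(f"{total:.3f}s")  # 19.997s -- 1 ms short of the printed total

# The mismatch is within one rounding step of the millisecond display
# precision, consistent with each figure being rounded separately.
assert abs(total - total_printed) <= 0.002
```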
Jul 7 00:15:10.860368 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:15:10.870901 amazon-ssm-agent[2077]: 2025-07-07 00:15:10.7676 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2175) started Jul 7 00:15:10.972161 amazon-ssm-agent[2077]: 2025-07-07 00:15:10.7676 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 7 00:15:11.172989 ntpd[1851]: Listen normally on 6 eth0 [fe80::44c:58ff:fe3a:d8b7%2]:123 Jul 7 00:15:11.173450 ntpd[1851]: 7 Jul 00:15:11 ntpd[1851]: Listen normally on 6 eth0 [fe80::44c:58ff:fe3a:d8b7%2]:123 Jul 7 00:15:11.751517 kubelet[2185]: E0707 00:15:11.751435 2185 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:15:11.756869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:15:11.757027 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:15:11.757612 systemd[1]: kubelet.service: Consumed 1.098s CPU time, 264.3M memory peak. Jul 7 00:15:17.090624 systemd-resolved[1762]: Clock change detected. Flushing caches. Jul 7 00:15:22.487330 systemd[1]: Started sshd@3-172.31.30.121:22-147.75.109.163:46704.service - OpenSSH per-connection server daemon (147.75.109.163:46704). Jul 7 00:15:22.668247 sshd[2205]: Accepted publickey for core from 147.75.109.163 port 46704 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:15:22.669765 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:22.675273 systemd-logind[1861]: New session 4 of user core. 
Jul 7 00:15:22.686467 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:15:22.812236 sshd[2207]: Connection closed by 147.75.109.163 port 46704 Jul 7 00:15:22.813163 sshd-session[2205]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:22.817060 systemd[1]: sshd@3-172.31.30.121:22-147.75.109.163:46704.service: Deactivated successfully. Jul 7 00:15:22.819764 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:15:22.822138 systemd-logind[1861]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:15:22.824308 systemd-logind[1861]: Removed session 4. Jul 7 00:15:22.848495 systemd[1]: Started sshd@4-172.31.30.121:22-147.75.109.163:46714.service - OpenSSH per-connection server daemon (147.75.109.163:46714). Jul 7 00:15:23.035870 sshd[2213]: Accepted publickey for core from 147.75.109.163 port 46714 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:15:23.038133 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:23.045729 systemd-logind[1861]: New session 5 of user core. Jul 7 00:15:23.053480 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 00:15:23.169882 sshd[2215]: Connection closed by 147.75.109.163 port 46714 Jul 7 00:15:23.170690 sshd-session[2213]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:23.175189 systemd[1]: sshd@4-172.31.30.121:22-147.75.109.163:46714.service: Deactivated successfully. Jul 7 00:15:23.175249 systemd-logind[1861]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:15:23.177137 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:15:23.179062 systemd-logind[1861]: Removed session 5. Jul 7 00:15:23.204483 systemd[1]: Started sshd@5-172.31.30.121:22-147.75.109.163:46722.service - OpenSSH per-connection server daemon (147.75.109.163:46722). 
Jul 7 00:15:23.387166 sshd[2221]: Accepted publickey for core from 147.75.109.163 port 46722 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:15:23.388694 sshd-session[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:23.395871 systemd-logind[1861]: New session 6 of user core. Jul 7 00:15:23.405482 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:15:23.526304 sshd[2223]: Connection closed by 147.75.109.163 port 46722 Jul 7 00:15:23.527276 sshd-session[2221]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:23.530854 systemd[1]: sshd@5-172.31.30.121:22-147.75.109.163:46722.service: Deactivated successfully. Jul 7 00:15:23.532932 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:15:23.535588 systemd-logind[1861]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:15:23.537107 systemd-logind[1861]: Removed session 6. Jul 7 00:15:23.562631 systemd[1]: Started sshd@6-172.31.30.121:22-147.75.109.163:46730.service - OpenSSH per-connection server daemon (147.75.109.163:46730). Jul 7 00:15:23.699467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:15:23.702030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:15:23.729889 sshd[2229]: Accepted publickey for core from 147.75.109.163 port 46730 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:15:23.732399 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:23.741520 systemd-logind[1861]: New session 7 of user core. Jul 7 00:15:23.745441 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 7 00:15:23.862658 sudo[2235]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:15:23.862951 sudo[2235]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:15:23.880266 sudo[2235]: pam_unix(sudo:session): session closed for user root Jul 7 00:15:23.903009 sshd[2234]: Connection closed by 147.75.109.163 port 46730 Jul 7 00:15:23.904517 sshd-session[2229]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:23.910266 systemd-logind[1861]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:15:23.910552 systemd[1]: sshd@6-172.31.30.121:22-147.75.109.163:46730.service: Deactivated successfully. Jul 7 00:15:23.914355 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:15:23.918540 systemd-logind[1861]: Removed session 7. Jul 7 00:15:23.936030 systemd[1]: Started sshd@7-172.31.30.121:22-147.75.109.163:46740.service - OpenSSH per-connection server daemon (147.75.109.163:46740). Jul 7 00:15:23.983128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:15:23.994950 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:15:24.113467 kubelet[2248]: E0707 00:15:24.113277 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:15:24.118276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:15:24.118600 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:15:24.119216 systemd[1]: kubelet.service: Consumed 200ms CPU time, 110M memory peak. 
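Editor's note: the recurring kubelet failure above (and at 00:15:11) is the same root cause each time: `/var/lib/kubelet/config.yaml` does not exist yet, so the unit exits status=1 until something (normally `kubeadm init`/`kubeadm join`) writes it. A minimal Python sketch of the failing check — the path and error wording are taken from the log; the helper name is ours, not kubelet's:

```python
def load_kubelet_config(path):
    """Illustrative stand-in for kubelet's config-file load: surface the
    underlying open() error the way the logged message does."""
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        # Mirrors: error: open /var/lib/kubelet/config.yaml: no such file or directory
        raise RuntimeError(
            f'failed to read kubelet config file "{path}", '
            f"error: open {path}: {e.strerror.lower()}"
        ) from e
```

Until the file appears, systemd keeps rescheduling the unit (see the "restart counter" entries later in the log).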
Jul 7 00:15:24.142945 sshd[2241]: Accepted publickey for core from 147.75.109.163 port 46740 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:15:24.144774 sshd-session[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:24.150696 systemd-logind[1861]: New session 8 of user core. Jul 7 00:15:24.160646 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:15:24.260279 sudo[2257]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:15:24.260680 sudo[2257]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:15:24.269422 sudo[2257]: pam_unix(sudo:session): session closed for user root Jul 7 00:15:24.277611 sudo[2256]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 00:15:24.278035 sudo[2256]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:15:24.289855 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:15:24.340119 augenrules[2279]: No rules Jul 7 00:15:24.341830 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:15:24.342118 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:15:24.344438 sudo[2256]: pam_unix(sudo:session): session closed for user root Jul 7 00:15:24.367854 sshd[2255]: Connection closed by 147.75.109.163 port 46740 Jul 7 00:15:24.369263 sshd-session[2241]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:24.374829 systemd[1]: sshd@7-172.31.30.121:22-147.75.109.163:46740.service: Deactivated successfully. Jul 7 00:15:24.377269 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:15:24.378314 systemd-logind[1861]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:15:24.380055 systemd-logind[1861]: Removed session 8. 
Jul 7 00:15:24.406344 systemd[1]: Started sshd@8-172.31.30.121:22-147.75.109.163:46754.service - OpenSSH per-connection server daemon (147.75.109.163:46754). Jul 7 00:15:24.589560 sshd[2288]: Accepted publickey for core from 147.75.109.163 port 46754 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:15:24.591124 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:24.597105 systemd-logind[1861]: New session 9 of user core. Jul 7 00:15:24.608487 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:15:24.706394 sudo[2291]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:15:24.706816 sudo[2291]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:15:25.154711 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:15:25.176721 (dockerd)[2308]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:15:25.471011 dockerd[2308]: time="2025-07-07T00:15:25.470817322Z" level=info msg="Starting up" Jul 7 00:15:25.472681 dockerd[2308]: time="2025-07-07T00:15:25.472612802Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 00:15:25.518042 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport526295687-merged.mount: Deactivated successfully. Jul 7 00:15:25.566473 dockerd[2308]: time="2025-07-07T00:15:25.566422761Z" level=info msg="Loading containers: start." Jul 7 00:15:25.580229 kernel: Initializing XFRM netlink socket Jul 7 00:15:25.824508 (udev-worker)[2329]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:15:25.878774 systemd-networkd[1813]: docker0: Link UP Jul 7 00:15:25.889532 dockerd[2308]: time="2025-07-07T00:15:25.889466024Z" level=info msg="Loading containers: done." 
Jul 7 00:15:25.912229 dockerd[2308]: time="2025-07-07T00:15:25.912154534Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:15:25.912430 dockerd[2308]: time="2025-07-07T00:15:25.912279440Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 00:15:25.912430 dockerd[2308]: time="2025-07-07T00:15:25.912420560Z" level=info msg="Initializing buildkit" Jul 7 00:15:25.956505 dockerd[2308]: time="2025-07-07T00:15:25.956415419Z" level=info msg="Completed buildkit initialization" Jul 7 00:15:25.964193 dockerd[2308]: time="2025-07-07T00:15:25.963999327Z" level=info msg="Daemon has completed initialization" Jul 7 00:15:25.964193 dockerd[2308]: time="2025-07-07T00:15:25.964092843Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:15:25.964352 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:15:27.036593 containerd[1908]: time="2025-07-07T00:15:27.036533440Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 00:15:27.683287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount451288246.mount: Deactivated successfully. 
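Editor's note: containerd's "Pulled image … size … in …" messages let you read off pull throughput directly. A sketch using the kube-apiserver figures logged at 00:15:29 (28795845 bytes in 2.077292675 s); the arithmetic is ours, the numbers are from the log:

```python
def pull_rate(bytes_read: int, seconds: float) -> float:
    """Bytes per second for a completed image pull."""
    return bytes_read / seconds

# Figures copied from the kube-apiserver:v1.32.6 pull above:
rate = pull_rate(28795845, 2.077292675)  # roughly 13.9 MB/s
```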
Jul 7 00:15:29.107894 containerd[1908]: time="2025-07-07T00:15:29.107835160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:29.108851 containerd[1908]: time="2025-07-07T00:15:29.108744444Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 7 00:15:29.110246 containerd[1908]: time="2025-07-07T00:15:29.110164500Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:29.113229 containerd[1908]: time="2025-07-07T00:15:29.112846731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:29.113914 containerd[1908]: time="2025-07-07T00:15:29.113875063Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.077292675s" Jul 7 00:15:29.114009 containerd[1908]: time="2025-07-07T00:15:29.113924934Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 7 00:15:29.114789 containerd[1908]: time="2025-07-07T00:15:29.114745817Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 00:15:30.756719 containerd[1908]: time="2025-07-07T00:15:30.756650100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:30.758548 containerd[1908]: time="2025-07-07T00:15:30.758309091Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 7 00:15:30.760639 containerd[1908]: time="2025-07-07T00:15:30.760588740Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:30.764484 containerd[1908]: time="2025-07-07T00:15:30.764442663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:30.765698 containerd[1908]: time="2025-07-07T00:15:30.765665183Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.650768291s" Jul 7 00:15:30.765814 containerd[1908]: time="2025-07-07T00:15:30.765800062Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 7 00:15:30.766290 containerd[1908]: time="2025-07-07T00:15:30.766260469Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 00:15:32.301537 containerd[1908]: time="2025-07-07T00:15:32.301463792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:32.303785 containerd[1908]: time="2025-07-07T00:15:32.303556908Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 7 00:15:32.306188 containerd[1908]: time="2025-07-07T00:15:32.306141241Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:32.310305 containerd[1908]: time="2025-07-07T00:15:32.310262626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:32.311152 containerd[1908]: time="2025-07-07T00:15:32.311121350Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.544737077s" Jul 7 00:15:32.311285 containerd[1908]: time="2025-07-07T00:15:32.311271011Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 7 00:15:32.312057 containerd[1908]: time="2025-07-07T00:15:32.312027366Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 00:15:33.401088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount535918720.mount: Deactivated successfully. 
Jul 7 00:15:33.986931 containerd[1908]: time="2025-07-07T00:15:33.986818840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:33.989080 containerd[1908]: time="2025-07-07T00:15:33.988883797Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 7 00:15:33.991316 containerd[1908]: time="2025-07-07T00:15:33.991269701Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:33.994701 containerd[1908]: time="2025-07-07T00:15:33.994625350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:33.995362 containerd[1908]: time="2025-07-07T00:15:33.995229678Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.683172797s" Jul 7 00:15:33.995362 containerd[1908]: time="2025-07-07T00:15:33.995260675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 7 00:15:33.995769 containerd[1908]: time="2025-07-07T00:15:33.995743681Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:15:34.369221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 00:15:34.371770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 00:15:34.564932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820046189.mount: Deactivated successfully. Jul 7 00:15:34.649965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:15:34.661680 (kubelet)[2594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:15:34.720241 kubelet[2594]: E0707 00:15:34.719933 2594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:15:34.722956 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:15:34.723107 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:15:34.724187 systemd[1]: kubelet.service: Consumed 189ms CPU time, 110.4M memory peak. 
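Editor's note: the gaps between each kubelet failure and the next "Scheduled restart job" entry are ~12 s and ~10 s, consistent with a `RestartSec=` of roughly 10 seconds (an assumption — the unit file is not shown in this log). A sketch computing those gaps from the logged timestamps:

```python
from datetime import datetime

def ts(s: str) -> datetime:
    # Journald short timestamps from this boot, e.g. "00:15:23.699467"
    return datetime.strptime(s, "%H:%M:%S.%f")

# Moments copied from the log above: unit failed -> restart scheduled.
failed_1  = ts("00:15:11.757027")
restart_1 = ts("00:15:23.699467")   # restart counter is at 1
failed_2  = ts("00:15:24.118600")
restart_2 = ts("00:15:34.369221")   # restart counter is at 2

gap_1 = (restart_1 - failed_1).total_seconds()  # ~11.9 s
gap_2 = (restart_2 - failed_2).total_seconds()  # ~10.3 s
```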
Jul 7 00:15:35.811767 containerd[1908]: time="2025-07-07T00:15:35.811676676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:35.812800 containerd[1908]: time="2025-07-07T00:15:35.812619188Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 7 00:15:35.814767 containerd[1908]: time="2025-07-07T00:15:35.814723779Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:35.826115 containerd[1908]: time="2025-07-07T00:15:35.826050718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:35.830296 containerd[1908]: time="2025-07-07T00:15:35.830222705Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.834421693s" Jul 7 00:15:35.830296 containerd[1908]: time="2025-07-07T00:15:35.830284069Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 00:15:35.832646 containerd[1908]: time="2025-07-07T00:15:35.832594970Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:15:36.290122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2871096686.mount: Deactivated successfully. 
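Editor's note: the pull-completion messages share one shape, so the reference, size, and duration can be pulled out with a regex. A sketch against the unescaped message text (assumption: the `\"` sequences in the journal are journald quoting, not message content); the pattern and helper are ours:

```python
import re

PULLED = re.compile(
    r'Pulled image "(?P<ref>[^"]+)".*'
    r'size "(?P<size>\d+)" in (?P<dur>[\d.]+)(?P<unit>m?s)'
)

def parse_pulled(msg: str):
    """Extract (image ref, size in bytes, duration in seconds) from a
    containerd 'Pulled image ...' message, or None if it doesn't match."""
    m = PULLED.search(msg)
    if not m:
        return None
    secs = float(m["dur"]) / (1000 if m["unit"] == "ms" else 1)
    return m["ref"], int(m["size"]), secs
```

Note the `m?s` alternative: the pause-image pull later in the log reports its duration in milliseconds ("469.496484ms"), the rest in seconds.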
Jul 7 00:15:36.297462 containerd[1908]: time="2025-07-07T00:15:36.297400176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:15:36.298241 containerd[1908]: time="2025-07-07T00:15:36.298083662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 00:15:36.299318 containerd[1908]: time="2025-07-07T00:15:36.299258824Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:15:36.301515 containerd[1908]: time="2025-07-07T00:15:36.301456305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:15:36.302572 containerd[1908]: time="2025-07-07T00:15:36.302138006Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 469.496484ms" Jul 7 00:15:36.302572 containerd[1908]: time="2025-07-07T00:15:36.302179051Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:15:36.303023 containerd[1908]: time="2025-07-07T00:15:36.302992420Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 00:15:36.767006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296724061.mount: Deactivated 
successfully. Jul 7 00:15:38.678419 containerd[1908]: time="2025-07-07T00:15:38.678233884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:38.680239 containerd[1908]: time="2025-07-07T00:15:38.680120423Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 7 00:15:38.682609 containerd[1908]: time="2025-07-07T00:15:38.682545829Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:38.688259 containerd[1908]: time="2025-07-07T00:15:38.688180286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:38.689741 containerd[1908]: time="2025-07-07T00:15:38.689680957Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.386654117s" Jul 7 00:15:38.689741 containerd[1908]: time="2025-07-07T00:15:38.689726936Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 7 00:15:41.563959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:15:41.564363 systemd[1]: kubelet.service: Consumed 189ms CPU time, 110.4M memory peak. Jul 7 00:15:41.567436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:15:41.603112 systemd[1]: Reload requested from client PID 2732 ('systemctl') (unit session-9.scope)... 
Jul 7 00:15:41.603131 systemd[1]: Reloading... Jul 7 00:15:41.752234 zram_generator::config[2773]: No configuration found. Jul 7 00:15:41.920362 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:15:42.058572 systemd[1]: Reloading finished in 453 ms. Jul 7 00:15:42.087614 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 7 00:15:42.137617 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:15:42.137737 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:15:42.138165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:15:42.138248 systemd[1]: kubelet.service: Consumed 151ms CPU time, 97.8M memory peak. Jul 7 00:15:42.141706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:15:42.379450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:15:42.387794 (kubelet)[2844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:15:42.439134 kubelet[2844]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:15:42.439602 kubelet[2844]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:15:42.439663 kubelet[2844]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 00:15:42.440172 kubelet[2844]: I0707 00:15:42.440128 2844 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:15:42.696459 kubelet[2844]: I0707 00:15:42.696153 2844 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:15:42.696459 kubelet[2844]: I0707 00:15:42.696391 2844 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:15:42.697171 kubelet[2844]: I0707 00:15:42.697140 2844 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:15:42.753909 kubelet[2844]: E0707 00:15:42.753839 2844 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.121:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:42.756539 kubelet[2844]: I0707 00:15:42.756494 2844 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:15:42.771373 kubelet[2844]: I0707 00:15:42.771340 2844 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:15:42.778619 kubelet[2844]: I0707 00:15:42.778584 2844 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:15:42.782217 kubelet[2844]: I0707 00:15:42.782118 2844 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:15:42.782385 kubelet[2844]: I0707 00:15:42.782178 2844 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-121","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:15:42.785567 kubelet[2844]: I0707 00:15:42.785523 2844 topology_manager.go:138] "Creating topology manager with none 
policy" Jul 7 00:15:42.785567 kubelet[2844]: I0707 00:15:42.785558 2844 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:15:42.788359 kubelet[2844]: I0707 00:15:42.788325 2844 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:15:42.797099 kubelet[2844]: I0707 00:15:42.796537 2844 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:15:42.797099 kubelet[2844]: I0707 00:15:42.796602 2844 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:15:42.800583 kubelet[2844]: I0707 00:15:42.800535 2844 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:15:42.800583 kubelet[2844]: I0707 00:15:42.800578 2844 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:15:42.807485 kubelet[2844]: W0707 00:15:42.806671 2844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-121&limit=500&resourceVersion=0": dial tcp 172.31.30.121:6443: connect: connection refused Jul 7 00:15:42.807485 kubelet[2844]: E0707 00:15:42.806733 2844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-121&limit=500&resourceVersion=0\": dial tcp 172.31.30.121:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:42.807485 kubelet[2844]: W0707 00:15:42.807148 2844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.121:6443: connect: connection refused Jul 7 00:15:42.807694 kubelet[2844]: E0707 00:15:42.807187 2844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://172.31.30.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.121:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:42.813245 kubelet[2844]: I0707 00:15:42.812871 2844 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:15:42.817024 kubelet[2844]: I0707 00:15:42.816994 2844 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:15:42.817223 kubelet[2844]: W0707 00:15:42.817211 2844 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:15:42.820229 kubelet[2844]: I0707 00:15:42.820016 2844 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:15:42.820229 kubelet[2844]: I0707 00:15:42.820054 2844 server.go:1287] "Started kubelet" Jul 7 00:15:42.842139 kubelet[2844]: I0707 00:15:42.841986 2844 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:15:42.846186 kubelet[2844]: I0707 00:15:42.846020 2844 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:15:42.846671 kubelet[2844]: I0707 00:15:42.846649 2844 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:15:42.848485 kubelet[2844]: I0707 00:15:42.848409 2844 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:15:42.848852 kubelet[2844]: I0707 00:15:42.848816 2844 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:15:42.855032 kubelet[2844]: E0707 00:15:42.850796 2844 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.121:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.121:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ip-172-31-30-121.184fcfe0697ffef5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-121,UID:ip-172-31-30-121,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-121,},FirstTimestamp:2025-07-07 00:15:42.820032245 +0000 UTC m=+0.428076089,LastTimestamp:2025-07-07 00:15:42.820032245 +0000 UTC m=+0.428076089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-121,}" Jul 7 00:15:42.855615 kubelet[2844]: I0707 00:15:42.855433 2844 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:15:42.858026 kubelet[2844]: E0707 00:15:42.857873 2844 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-121\" not found" Jul 7 00:15:42.859936 kubelet[2844]: I0707 00:15:42.857966 2844 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:15:42.859936 kubelet[2844]: I0707 00:15:42.858691 2844 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:15:42.859936 kubelet[2844]: I0707 00:15:42.858779 2844 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:15:42.859936 kubelet[2844]: W0707 00:15:42.859114 2844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.121:6443: connect: connection refused Jul 7 00:15:42.859936 kubelet[2844]: E0707 00:15:42.859155 2844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.30.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.121:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:42.859936 kubelet[2844]: E0707 00:15:42.859247 2844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-121?timeout=10s\": dial tcp 172.31.30.121:6443: connect: connection refused" interval="200ms" Jul 7 00:15:42.862582 kubelet[2844]: I0707 00:15:42.862480 2844 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:15:42.862675 kubelet[2844]: I0707 00:15:42.862644 2844 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:15:42.879350 kubelet[2844]: I0707 00:15:42.879171 2844 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:15:42.883719 kubelet[2844]: I0707 00:15:42.883650 2844 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:15:42.887890 kubelet[2844]: I0707 00:15:42.887857 2844 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:15:42.888055 kubelet[2844]: I0707 00:15:42.888045 2844 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:15:42.888168 kubelet[2844]: I0707 00:15:42.888157 2844 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:15:42.888254 kubelet[2844]: I0707 00:15:42.888244 2844 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:15:42.888433 kubelet[2844]: E0707 00:15:42.888379 2844 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:15:42.898784 kubelet[2844]: E0707 00:15:42.898750 2844 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:15:42.902095 kubelet[2844]: W0707 00:15:42.902032 2844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.121:6443: connect: connection refused Jul 7 00:15:42.903171 kubelet[2844]: E0707 00:15:42.902104 2844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.121:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:42.922055 kubelet[2844]: I0707 00:15:42.921064 2844 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:15:42.922055 kubelet[2844]: I0707 00:15:42.921078 2844 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:15:42.922055 kubelet[2844]: I0707 00:15:42.921095 2844 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:15:42.949015 kubelet[2844]: I0707 00:15:42.948918 2844 policy_none.go:49] "None policy: Start" Jul 7 00:15:42.949015 kubelet[2844]: I0707 00:15:42.948967 2844 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:15:42.949015 kubelet[2844]: I0707 00:15:42.948981 2844 state_mem.go:35] "Initializing new in-memory state store" Jul 7 
00:15:42.958415 kubelet[2844]: E0707 00:15:42.958251 2844 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-121\" not found" Jul 7 00:15:42.976032 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:15:42.990326 kubelet[2844]: E0707 00:15:42.990071 2844 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:15:42.996217 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 00:15:43.003893 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:15:43.019341 kubelet[2844]: I0707 00:15:43.019310 2844 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:15:43.019535 kubelet[2844]: I0707 00:15:43.019521 2844 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:15:43.019589 kubelet[2844]: I0707 00:15:43.019545 2844 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:15:43.020088 kubelet[2844]: I0707 00:15:43.020053 2844 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:15:43.021777 kubelet[2844]: E0707 00:15:43.021751 2844 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:15:43.022363 kubelet[2844]: E0707 00:15:43.022284 2844 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-121\" not found" Jul 7 00:15:43.060158 kubelet[2844]: E0707 00:15:43.060111 2844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-121?timeout=10s\": dial tcp 172.31.30.121:6443: connect: connection refused" interval="400ms" Jul 7 00:15:43.124352 kubelet[2844]: I0707 00:15:43.124307 2844 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-121" Jul 7 00:15:43.125168 kubelet[2844]: E0707 00:15:43.124864 2844 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.121:6443/api/v1/nodes\": dial tcp 172.31.30.121:6443: connect: connection refused" node="ip-172-31-30-121" Jul 7 00:15:43.207954 systemd[1]: Created slice kubepods-burstable-podd4e8724944771c4b08ebd706241225b7.slice - libcontainer container kubepods-burstable-podd4e8724944771c4b08ebd706241225b7.slice. Jul 7 00:15:43.223779 kubelet[2844]: E0707 00:15:43.223743 2844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-121\" not found" node="ip-172-31-30-121" Jul 7 00:15:43.228417 systemd[1]: Created slice kubepods-burstable-pod4b7ee575df41817ff721c0568441e6fd.slice - libcontainer container kubepods-burstable-pod4b7ee575df41817ff721c0568441e6fd.slice. 
Jul 7 00:15:43.237631 kubelet[2844]: E0707 00:15:43.237582 2844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-121\" not found" node="ip-172-31-30-121" Jul 7 00:15:43.241475 systemd[1]: Created slice kubepods-burstable-pod5e5121fc8033e0298487c8175c944dea.slice - libcontainer container kubepods-burstable-pod5e5121fc8033e0298487c8175c944dea.slice. Jul 7 00:15:43.243903 kubelet[2844]: E0707 00:15:43.243869 2844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-121\" not found" node="ip-172-31-30-121" Jul 7 00:15:43.260850 kubelet[2844]: I0707 00:15:43.260343 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e5121fc8033e0298487c8175c944dea-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-121\" (UID: \"5e5121fc8033e0298487c8175c944dea\") " pod="kube-system/kube-scheduler-ip-172-31-30-121" Jul 7 00:15:43.260850 kubelet[2844]: I0707 00:15:43.260388 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b7ee575df41817ff721c0568441e6fd-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-121\" (UID: \"4b7ee575df41817ff721c0568441e6fd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-121" Jul 7 00:15:43.260850 kubelet[2844]: I0707 00:15:43.260409 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4b7ee575df41817ff721c0568441e6fd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-121\" (UID: \"4b7ee575df41817ff721c0568441e6fd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-121" Jul 7 00:15:43.260850 kubelet[2844]: I0707 00:15:43.260432 2844 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b7ee575df41817ff721c0568441e6fd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-121\" (UID: \"4b7ee575df41817ff721c0568441e6fd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-121" Jul 7 00:15:43.260850 kubelet[2844]: I0707 00:15:43.260451 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b7ee575df41817ff721c0568441e6fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-121\" (UID: \"4b7ee575df41817ff721c0568441e6fd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-121" Jul 7 00:15:43.261248 kubelet[2844]: I0707 00:15:43.260467 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4e8724944771c4b08ebd706241225b7-ca-certs\") pod \"kube-apiserver-ip-172-31-30-121\" (UID: \"d4e8724944771c4b08ebd706241225b7\") " pod="kube-system/kube-apiserver-ip-172-31-30-121" Jul 7 00:15:43.261248 kubelet[2844]: I0707 00:15:43.260483 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4e8724944771c4b08ebd706241225b7-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-121\" (UID: \"d4e8724944771c4b08ebd706241225b7\") " pod="kube-system/kube-apiserver-ip-172-31-30-121" Jul 7 00:15:43.261248 kubelet[2844]: I0707 00:15:43.260501 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4e8724944771c4b08ebd706241225b7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-121\" (UID: \"d4e8724944771c4b08ebd706241225b7\") " pod="kube-system/kube-apiserver-ip-172-31-30-121" Jul 7 
00:15:43.261248 kubelet[2844]: I0707 00:15:43.260536 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b7ee575df41817ff721c0568441e6fd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-121\" (UID: \"4b7ee575df41817ff721c0568441e6fd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-121" Jul 7 00:15:43.327634 kubelet[2844]: I0707 00:15:43.327600 2844 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-121" Jul 7 00:15:43.328014 kubelet[2844]: E0707 00:15:43.327983 2844 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.121:6443/api/v1/nodes\": dial tcp 172.31.30.121:6443: connect: connection refused" node="ip-172-31-30-121" Jul 7 00:15:43.461640 kubelet[2844]: E0707 00:15:43.461470 2844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-121?timeout=10s\": dial tcp 172.31.30.121:6443: connect: connection refused" interval="800ms" Jul 7 00:15:43.525866 containerd[1908]: time="2025-07-07T00:15:43.525790768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-121,Uid:d4e8724944771c4b08ebd706241225b7,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:43.538514 containerd[1908]: time="2025-07-07T00:15:43.538467058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-121,Uid:4b7ee575df41817ff721c0568441e6fd,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:43.545637 containerd[1908]: time="2025-07-07T00:15:43.545592783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-121,Uid:5e5121fc8033e0298487c8175c944dea,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:43.665658 kubelet[2844]: E0707 00:15:43.665547 2844 event.go:368] "Unable to write event (may retry 
after sleeping)" err="Post \"https://172.31.30.121:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.121:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-121.184fcfe0697ffef5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-121,UID:ip-172-31-30-121,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-121,},FirstTimestamp:2025-07-07 00:15:42.820032245 +0000 UTC m=+0.428076089,LastTimestamp:2025-07-07 00:15:42.820032245 +0000 UTC m=+0.428076089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-121,}" Jul 7 00:15:43.706070 containerd[1908]: time="2025-07-07T00:15:43.702371140Z" level=info msg="connecting to shim ce40969ee7b9cb2f668f1091b397839cf0a54a0268ec5e41d200735980335683" address="unix:///run/containerd/s/159f83430504167a9bdef6d3cf44052b31be404112a269e4493b94a8f89b306e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:43.706988 containerd[1908]: time="2025-07-07T00:15:43.706915444Z" level=info msg="connecting to shim e67c3a20b9678eab0a577f4420a040e7dfbc51cda3e242a6eed276ad8891a48a" address="unix:///run/containerd/s/09de36eed75ac9a6280ffdc71906afe78328ab96f610eb4573d58b6337b4a42c" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:43.708364 kubelet[2844]: W0707 00:15:43.708026 2844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-121&limit=500&resourceVersion=0": dial tcp 172.31.30.121:6443: connect: connection refused Jul 7 00:15:43.708666 kubelet[2844]: E0707 00:15:43.708610 2844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.30.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-121&limit=500&resourceVersion=0\": dial tcp 172.31.30.121:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:43.713517 containerd[1908]: time="2025-07-07T00:15:43.713381993Z" level=info msg="connecting to shim c6b7c8954356cfcd044c0f498be36b26814cb3351be81389e969d7fcf832a64b" address="unix:///run/containerd/s/f5ff0195ce43d81a27898884ee08b3538e20fb13f2c3bc1ff8710d2117615a10" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:43.735827 kubelet[2844]: I0707 00:15:43.732824 2844 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-121" Jul 7 00:15:43.737223 kubelet[2844]: E0707 00:15:43.737149 2844 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.121:6443/api/v1/nodes\": dial tcp 172.31.30.121:6443: connect: connection refused" node="ip-172-31-30-121" Jul 7 00:15:43.860472 systemd[1]: Started cri-containerd-c6b7c8954356cfcd044c0f498be36b26814cb3351be81389e969d7fcf832a64b.scope - libcontainer container c6b7c8954356cfcd044c0f498be36b26814cb3351be81389e969d7fcf832a64b. Jul 7 00:15:43.862331 systemd[1]: Started cri-containerd-ce40969ee7b9cb2f668f1091b397839cf0a54a0268ec5e41d200735980335683.scope - libcontainer container ce40969ee7b9cb2f668f1091b397839cf0a54a0268ec5e41d200735980335683. Jul 7 00:15:43.865383 systemd[1]: Started cri-containerd-e67c3a20b9678eab0a577f4420a040e7dfbc51cda3e242a6eed276ad8891a48a.scope - libcontainer container e67c3a20b9678eab0a577f4420a040e7dfbc51cda3e242a6eed276ad8891a48a. 
Jul 7 00:15:43.890942 kubelet[2844]: W0707 00:15:43.890193 2844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.121:6443: connect: connection refused Jul 7 00:15:43.891131 kubelet[2844]: E0707 00:15:43.890981 2844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.121:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:43.984148 kubelet[2844]: W0707 00:15:43.983665 2844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.121:6443: connect: connection refused Jul 7 00:15:43.984737 kubelet[2844]: E0707 00:15:43.984002 2844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.121:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:43.994553 containerd[1908]: time="2025-07-07T00:15:43.993833263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-121,Uid:5e5121fc8033e0298487c8175c944dea,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6b7c8954356cfcd044c0f498be36b26814cb3351be81389e969d7fcf832a64b\"" Jul 7 00:15:44.008888 containerd[1908]: time="2025-07-07T00:15:44.008839659Z" level=info msg="CreateContainer within sandbox \"c6b7c8954356cfcd044c0f498be36b26814cb3351be81389e969d7fcf832a64b\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:15:44.033180 containerd[1908]: time="2025-07-07T00:15:44.032944582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-121,Uid:4b7ee575df41817ff721c0568441e6fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e67c3a20b9678eab0a577f4420a040e7dfbc51cda3e242a6eed276ad8891a48a\"" Jul 7 00:15:44.040433 containerd[1908]: time="2025-07-07T00:15:44.040140234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-121,Uid:d4e8724944771c4b08ebd706241225b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce40969ee7b9cb2f668f1091b397839cf0a54a0268ec5e41d200735980335683\"" Jul 7 00:15:44.041945 containerd[1908]: time="2025-07-07T00:15:44.041905062Z" level=info msg="CreateContainer within sandbox \"e67c3a20b9678eab0a577f4420a040e7dfbc51cda3e242a6eed276ad8891a48a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:15:44.042414 containerd[1908]: time="2025-07-07T00:15:44.042370674Z" level=info msg="Container 3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:44.045488 containerd[1908]: time="2025-07-07T00:15:44.045383010Z" level=info msg="CreateContainer within sandbox \"ce40969ee7b9cb2f668f1091b397839cf0a54a0268ec5e41d200735980335683\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:15:44.064343 containerd[1908]: time="2025-07-07T00:15:44.064282808Z" level=info msg="Container 90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:44.070164 containerd[1908]: time="2025-07-07T00:15:44.070120257Z" level=info msg="CreateContainer within sandbox \"c6b7c8954356cfcd044c0f498be36b26814cb3351be81389e969d7fcf832a64b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406\"" Jul 7 00:15:44.070945 containerd[1908]: time="2025-07-07T00:15:44.070895692Z" level=info msg="StartContainer for \"3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406\"" Jul 7 00:15:44.073039 containerd[1908]: time="2025-07-07T00:15:44.072969666Z" level=info msg="connecting to shim 3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406" address="unix:///run/containerd/s/f5ff0195ce43d81a27898884ee08b3538e20fb13f2c3bc1ff8710d2117615a10" protocol=ttrpc version=3 Jul 7 00:15:44.079514 containerd[1908]: time="2025-07-07T00:15:44.079440795Z" level=info msg="CreateContainer within sandbox \"e67c3a20b9678eab0a577f4420a040e7dfbc51cda3e242a6eed276ad8891a48a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d\"" Jul 7 00:15:44.080812 containerd[1908]: time="2025-07-07T00:15:44.080399281Z" level=info msg="StartContainer for \"90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d\"" Jul 7 00:15:44.082224 containerd[1908]: time="2025-07-07T00:15:44.081623430Z" level=info msg="Container 783bb0d2ce502f1e831243ded72956ec762a6da21f6d17d5546a200326bca6a0: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:44.085613 containerd[1908]: time="2025-07-07T00:15:44.085559247Z" level=info msg="connecting to shim 90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d" address="unix:///run/containerd/s/09de36eed75ac9a6280ffdc71906afe78328ab96f610eb4573d58b6337b4a42c" protocol=ttrpc version=3 Jul 7 00:15:44.100556 containerd[1908]: time="2025-07-07T00:15:44.100340265Z" level=info msg="CreateContainer within sandbox \"ce40969ee7b9cb2f668f1091b397839cf0a54a0268ec5e41d200735980335683\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"783bb0d2ce502f1e831243ded72956ec762a6da21f6d17d5546a200326bca6a0\"" Jul 7 00:15:44.101911 containerd[1908]: 
time="2025-07-07T00:15:44.101868644Z" level=info msg="StartContainer for \"783bb0d2ce502f1e831243ded72956ec762a6da21f6d17d5546a200326bca6a0\"" Jul 7 00:15:44.104103 containerd[1908]: time="2025-07-07T00:15:44.104057834Z" level=info msg="connecting to shim 783bb0d2ce502f1e831243ded72956ec762a6da21f6d17d5546a200326bca6a0" address="unix:///run/containerd/s/159f83430504167a9bdef6d3cf44052b31be404112a269e4493b94a8f89b306e" protocol=ttrpc version=3 Jul 7 00:15:44.105892 systemd[1]: Started cri-containerd-3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406.scope - libcontainer container 3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406. Jul 7 00:15:44.127766 systemd[1]: Started cri-containerd-90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d.scope - libcontainer container 90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d. Jul 7 00:15:44.138886 systemd[1]: Started cri-containerd-783bb0d2ce502f1e831243ded72956ec762a6da21f6d17d5546a200326bca6a0.scope - libcontainer container 783bb0d2ce502f1e831243ded72956ec762a6da21f6d17d5546a200326bca6a0. 
Jul 7 00:15:44.255167 containerd[1908]: time="2025-07-07T00:15:44.254527818Z" level=info msg="StartContainer for \"783bb0d2ce502f1e831243ded72956ec762a6da21f6d17d5546a200326bca6a0\" returns successfully" Jul 7 00:15:44.263965 kubelet[2844]: E0707 00:15:44.263918 2844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-121?timeout=10s\": dial tcp 172.31.30.121:6443: connect: connection refused" interval="1.6s" Jul 7 00:15:44.266668 containerd[1908]: time="2025-07-07T00:15:44.266338115Z" level=info msg="StartContainer for \"90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d\" returns successfully" Jul 7 00:15:44.283226 containerd[1908]: time="2025-07-07T00:15:44.283163310Z" level=info msg="StartContainer for \"3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406\" returns successfully" Jul 7 00:15:44.378535 kubelet[2844]: W0707 00:15:44.378385 2844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.121:6443: connect: connection refused Jul 7 00:15:44.378535 kubelet[2844]: E0707 00:15:44.378491 2844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.121:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:44.540025 kubelet[2844]: I0707 00:15:44.539759 2844 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-121" Jul 7 00:15:44.540899 kubelet[2844]: E0707 00:15:44.540858 2844 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.121:6443/api/v1/nodes\": dial tcp 
172.31.30.121:6443: connect: connection refused" node="ip-172-31-30-121" Jul 7 00:15:44.896406 kubelet[2844]: E0707 00:15:44.896349 2844 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.121:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:44.938027 kubelet[2844]: E0707 00:15:44.937994 2844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-121\" not found" node="ip-172-31-30-121" Jul 7 00:15:44.943297 kubelet[2844]: E0707 00:15:44.942891 2844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-121\" not found" node="ip-172-31-30-121" Jul 7 00:15:44.946227 kubelet[2844]: E0707 00:15:44.945942 2844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-121\" not found" node="ip-172-31-30-121" Jul 7 00:15:45.947641 kubelet[2844]: E0707 00:15:45.947601 2844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-121\" not found" node="ip-172-31-30-121" Jul 7 00:15:45.948618 kubelet[2844]: E0707 00:15:45.948580 2844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-121\" not found" node="ip-172-31-30-121" Jul 7 00:15:46.144998 kubelet[2844]: I0707 00:15:46.144967 2844 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-121" Jul 7 00:15:46.948897 kubelet[2844]: E0707 00:15:46.948859 2844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-121\" 
not found" node="ip-172-31-30-121" Jul 7 00:15:46.966497 kubelet[2844]: E0707 00:15:46.966458 2844 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-121\" not found" node="ip-172-31-30-121" Jul 7 00:15:47.058240 kubelet[2844]: I0707 00:15:47.058181 2844 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-121" Jul 7 00:15:47.058420 kubelet[2844]: E0707 00:15:47.058250 2844 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-30-121\": node \"ip-172-31-30-121\" not found" Jul 7 00:15:47.159679 kubelet[2844]: I0707 00:15:47.159148 2844 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-121" Jul 7 00:15:47.166813 kubelet[2844]: E0707 00:15:47.166774 2844 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-121" Jul 7 00:15:47.167038 kubelet[2844]: I0707 00:15:47.167010 2844 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-121" Jul 7 00:15:47.170230 kubelet[2844]: E0707 00:15:47.169410 2844 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-121" Jul 7 00:15:47.170230 kubelet[2844]: I0707 00:15:47.169442 2844 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-121" Jul 7 00:15:47.171464 kubelet[2844]: E0707 00:15:47.171428 2844 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-121" Jul 7 
00:15:47.815740 kubelet[2844]: I0707 00:15:47.815403 2844 apiserver.go:52] "Watching apiserver" Jul 7 00:15:47.859857 kubelet[2844]: I0707 00:15:47.859815 2844 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:15:49.378214 systemd[1]: Reload requested from client PID 3117 ('systemctl') (unit session-9.scope)... Jul 7 00:15:49.378248 systemd[1]: Reloading... Jul 7 00:15:49.552232 zram_generator::config[3158]: No configuration found. Jul 7 00:15:49.716963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:15:49.889478 systemd[1]: Reloading finished in 510 ms. Jul 7 00:15:49.930549 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:15:49.945196 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:15:49.945780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:15:49.945872 systemd[1]: kubelet.service: Consumed 887ms CPU time, 128.7M memory peak. Jul 7 00:15:49.951501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:15:50.292329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:15:50.306149 (kubelet)[3220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:15:50.394884 kubelet[3220]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:15:50.394884 kubelet[3220]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 7 00:15:50.394884 kubelet[3220]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:15:50.395728 kubelet[3220]: I0707 00:15:50.395009 3220 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 00:15:50.415280 kubelet[3220]: I0707 00:15:50.414554 3220 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 7 00:15:50.415280 kubelet[3220]: I0707 00:15:50.414593 3220 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 00:15:50.416149 kubelet[3220]: I0707 00:15:50.415865 3220 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 7 00:15:50.418132 kubelet[3220]: I0707 00:15:50.418104 3220 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 7 00:15:50.430549 kubelet[3220]: I0707 00:15:50.430510 3220 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 00:15:50.440969 sudo[3235]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 7 00:15:50.441329 sudo[3235]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 7 00:15:50.453458 kubelet[3220]: I0707 00:15:50.453346 3220 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 00:15:50.458499 kubelet[3220]: I0707 00:15:50.458457 3220 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 00:15:50.458872 kubelet[3220]: I0707 00:15:50.458754 3220 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 00:15:50.459186 kubelet[3220]: I0707 00:15:50.458791 3220 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-121","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 00:15:50.459186 kubelet[3220]: I0707 00:15:50.459050 3220 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 00:15:50.459186 kubelet[3220]: I0707 00:15:50.459063 3220 container_manager_linux.go:304] "Creating device plugin manager"
Jul 7 00:15:50.459186 kubelet[3220]: I0707 00:15:50.459123 3220 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:15:50.460188 kubelet[3220]: I0707 00:15:50.459334 3220 kubelet.go:446] "Attempting to sync node with API server"
Jul 7 00:15:50.460188 kubelet[3220]: I0707 00:15:50.459363 3220 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 00:15:50.460188 kubelet[3220]: I0707 00:15:50.459663 3220 kubelet.go:352] "Adding apiserver pod source"
Jul 7 00:15:50.460188 kubelet[3220]: I0707 00:15:50.459682 3220 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 00:15:50.465256 kubelet[3220]: I0707 00:15:50.464044 3220 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 00:15:50.465718 kubelet[3220]: I0707 00:15:50.465600 3220 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 00:15:50.468115 kubelet[3220]: I0707 00:15:50.467596 3220 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 7 00:15:50.468115 kubelet[3220]: I0707 00:15:50.467753 3220 server.go:1287] "Started kubelet"
Jul 7 00:15:50.486897 kubelet[3220]: I0707 00:15:50.486862 3220 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 00:15:50.500071 kubelet[3220]: I0707 00:15:50.499962 3220 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 00:15:50.501022 kubelet[3220]: I0707 00:15:50.500948 3220 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 00:15:50.501914 kubelet[3220]: I0707 00:15:50.501372 3220 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 00:15:50.501914 kubelet[3220]: I0707 00:15:50.501702 3220 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 00:15:50.508226 kubelet[3220]: I0707 00:15:50.505381 3220 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 7 00:15:50.508226 kubelet[3220]: E0707 00:15:50.505680 3220 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-121\" not found"
Jul 7 00:15:50.508226 kubelet[3220]: I0707 00:15:50.508049 3220 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 7 00:15:50.508226 kubelet[3220]: I0707 00:15:50.508191 3220 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 00:15:50.540223 kubelet[3220]: I0707 00:15:50.539944 3220 factory.go:221] Registration of the systemd container factory successfully
Jul 7 00:15:50.540223 kubelet[3220]: I0707 00:15:50.540065 3220 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 00:15:50.548628 kubelet[3220]: I0707 00:15:50.548456 3220 server.go:479] "Adding debug handlers to kubelet server"
Jul 7 00:15:50.554751 kubelet[3220]: I0707 00:15:50.554672 3220 factory.go:221] Registration of the containerd container factory successfully
Jul 7 00:15:50.568041 kubelet[3220]: I0707 00:15:50.567956 3220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 00:15:50.569876 kubelet[3220]: I0707 00:15:50.569811 3220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 00:15:50.569876 kubelet[3220]: I0707 00:15:50.569857 3220 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 7 00:15:50.569876 kubelet[3220]: I0707 00:15:50.569883 3220 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 00:15:50.570091 kubelet[3220]: I0707 00:15:50.569892 3220 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 7 00:15:50.570091 kubelet[3220]: E0707 00:15:50.569948 3220 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 00:15:50.667834 kubelet[3220]: I0707 00:15:50.667801 3220 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 7 00:15:50.667834 kubelet[3220]: I0707 00:15:50.667827 3220 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 7 00:15:50.668025 kubelet[3220]: I0707 00:15:50.667848 3220 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:15:50.668815 kubelet[3220]: I0707 00:15:50.668127 3220 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 00:15:50.668815 kubelet[3220]: I0707 00:15:50.668145 3220 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 00:15:50.668815 kubelet[3220]: I0707 00:15:50.668169 3220 policy_none.go:49] "None policy: Start"
Jul 7 00:15:50.668815 kubelet[3220]: I0707 00:15:50.668183 3220 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 7 00:15:50.668815 kubelet[3220]: I0707 00:15:50.668196 3220 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 00:15:50.668815 kubelet[3220]: I0707 00:15:50.668346 3220 state_mem.go:75] "Updated machine memory state"
Jul 7 00:15:50.670626 kubelet[3220]: E0707 00:15:50.670403 3220 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 7 00:15:50.675216 kubelet[3220]: I0707 00:15:50.675082 3220 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 00:15:50.676013 kubelet[3220]: I0707 00:15:50.675795 3220 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 00:15:50.676013 kubelet[3220]: I0707 00:15:50.675814 3220 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 00:15:50.676983 kubelet[3220]: I0707 00:15:50.676937 3220 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 00:15:50.688223 kubelet[3220]: E0707 00:15:50.687332 3220 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 00:15:50.799055 kubelet[3220]: I0707 00:15:50.798949 3220 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-121"
Jul 7 00:15:50.814504 kubelet[3220]: I0707 00:15:50.814222 3220 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-121"
Jul 7 00:15:50.814504 kubelet[3220]: I0707 00:15:50.814315 3220 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-121"
Jul 7 00:15:50.876027 kubelet[3220]: I0707 00:15:50.875308 3220 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-121"
Jul 7 00:15:50.876027 kubelet[3220]: I0707 00:15:50.875862 3220 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-121"
Jul 7 00:15:50.876245 kubelet[3220]: I0707 00:15:50.876122 3220 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-121"
Jul 7 00:15:50.911836 kubelet[3220]: I0707 00:15:50.911654 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4e8724944771c4b08ebd706241225b7-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-121\" (UID: \"d4e8724944771c4b08ebd706241225b7\") " pod="kube-system/kube-apiserver-ip-172-31-30-121"
Jul 7 00:15:50.912121 kubelet[3220]: I0707 00:15:50.911989 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b7ee575df41817ff721c0568441e6fd-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-121\" (UID: \"4b7ee575df41817ff721c0568441e6fd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-121"
Jul 7 00:15:50.912402 kubelet[3220]: I0707 00:15:50.912286 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b7ee575df41817ff721c0568441e6fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-121\" (UID: \"4b7ee575df41817ff721c0568441e6fd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-121"
Jul 7 00:15:50.912402 kubelet[3220]: I0707 00:15:50.912364 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e5121fc8033e0298487c8175c944dea-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-121\" (UID: \"5e5121fc8033e0298487c8175c944dea\") " pod="kube-system/kube-scheduler-ip-172-31-30-121"
Jul 7 00:15:50.912727 kubelet[3220]: I0707 00:15:50.912672 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4e8724944771c4b08ebd706241225b7-ca-certs\") pod \"kube-apiserver-ip-172-31-30-121\" (UID: \"d4e8724944771c4b08ebd706241225b7\") " pod="kube-system/kube-apiserver-ip-172-31-30-121"
Jul 7 00:15:50.912941 kubelet[3220]: I0707 00:15:50.912832 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4e8724944771c4b08ebd706241225b7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-121\" (UID: \"d4e8724944771c4b08ebd706241225b7\") " pod="kube-system/kube-apiserver-ip-172-31-30-121"
Jul 7 00:15:50.913129 kubelet[3220]: I0707 00:15:50.912863 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4b7ee575df41817ff721c0568441e6fd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-121\" (UID: \"4b7ee575df41817ff721c0568441e6fd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-121"
Jul 7 00:15:50.913129 kubelet[3220]: I0707 00:15:50.913052 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b7ee575df41817ff721c0568441e6fd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-121\" (UID: \"4b7ee575df41817ff721c0568441e6fd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-121"
Jul 7 00:15:50.913325 kubelet[3220]: I0707 00:15:50.913291 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b7ee575df41817ff721c0568441e6fd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-121\" (UID: \"4b7ee575df41817ff721c0568441e6fd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-121"
Jul 7 00:15:51.206266 sudo[3235]: pam_unix(sudo:session): session closed for user root
Jul 7 00:15:51.463247 kubelet[3220]: I0707 00:15:51.462421 3220 apiserver.go:52] "Watching apiserver"
Jul 7 00:15:51.508746 kubelet[3220]: I0707 00:15:51.508694 3220 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 00:15:51.617738 kubelet[3220]: I0707 00:15:51.617703 3220 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-121"
Jul 7 00:15:51.619874 kubelet[3220]: I0707 00:15:51.619840 3220 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-121"
Jul 7 00:15:51.635185 kubelet[3220]: E0707 00:15:51.635137 3220 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-121\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-121"
Jul 7 00:15:51.636769 kubelet[3220]: E0707 00:15:51.636731 3220 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-121\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-121"
Jul 7 00:15:51.685210 kubelet[3220]: I0707 00:15:51.685124 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-121" podStartSLOduration=1.685100626 podStartE2EDuration="1.685100626s" podCreationTimestamp="2025-07-07 00:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:51.667792373 +0000 UTC m=+1.354648108" watchObservedRunningTime="2025-07-07 00:15:51.685100626 +0000 UTC m=+1.371956361"
Jul 7 00:15:51.685431 kubelet[3220]: I0707 00:15:51.685273 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-121" podStartSLOduration=1.6852665789999999 podStartE2EDuration="1.685266579s" podCreationTimestamp="2025-07-07 00:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:51.683859866 +0000 UTC m=+1.370715599" watchObservedRunningTime="2025-07-07 00:15:51.685266579 +0000 UTC m=+1.372122313"
Jul 7 00:15:51.719873 kubelet[3220]: I0707 00:15:51.719356 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-121" podStartSLOduration=1.719332939 podStartE2EDuration="1.719332939s" podCreationTimestamp="2025-07-07 00:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:51.702422362 +0000 UTC m=+1.389278097" watchObservedRunningTime="2025-07-07 00:15:51.719332939 +0000 UTC m=+1.406188674"
Jul 7 00:15:53.142869 sudo[2291]: pam_unix(sudo:session): session closed for user root
Jul 7 00:15:53.165311 sshd[2290]: Connection closed by 147.75.109.163 port 46754
Jul 7 00:15:53.166366 sshd-session[2288]: pam_unix(sshd:session): session closed for user core
Jul 7 00:15:53.171802 systemd-logind[1861]: Session 9 logged out. Waiting for processes to exit.
Jul 7 00:15:53.172730 systemd[1]: sshd@8-172.31.30.121:22-147.75.109.163:46754.service: Deactivated successfully.
Jul 7 00:15:53.175550 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 00:15:53.175776 systemd[1]: session-9.scope: Consumed 5.098s CPU time, 207.4M memory peak.
Jul 7 00:15:53.177886 systemd-logind[1861]: Removed session 9.
Jul 7 00:15:54.061387 kubelet[3220]: I0707 00:15:54.061350 3220 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 00:15:54.062278 containerd[1908]: time="2025-07-07T00:15:54.061969061Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 7 00:15:54.063374 kubelet[3220]: I0707 00:15:54.062378 3220 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 00:15:54.764948 update_engine[1864]: I20250707 00:15:54.764873 1864 update_attempter.cc:509] Updating boot flags...
Jul 7 00:15:55.225840 systemd[1]: Created slice kubepods-besteffort-pod7fa5c683_4144_43e0_bd5c_08069ec0b636.slice - libcontainer container kubepods-besteffort-pod7fa5c683_4144_43e0_bd5c_08069ec0b636.slice.
Jul 7 00:15:55.246545 kubelet[3220]: I0707 00:15:55.245878 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62rx8\" (UniqueName: \"kubernetes.io/projected/7fa5c683-4144-43e0-bd5c-08069ec0b636-kube-api-access-62rx8\") pod \"kube-proxy-th9vn\" (UID: \"7fa5c683-4144-43e0-bd5c-08069ec0b636\") " pod="kube-system/kube-proxy-th9vn"
Jul 7 00:15:55.246545 kubelet[3220]: I0707 00:15:55.246175 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7fa5c683-4144-43e0-bd5c-08069ec0b636-kube-proxy\") pod \"kube-proxy-th9vn\" (UID: \"7fa5c683-4144-43e0-bd5c-08069ec0b636\") " pod="kube-system/kube-proxy-th9vn"
Jul 7 00:15:55.246545 kubelet[3220]: I0707 00:15:55.246348 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fa5c683-4144-43e0-bd5c-08069ec0b636-xtables-lock\") pod \"kube-proxy-th9vn\" (UID: \"7fa5c683-4144-43e0-bd5c-08069ec0b636\") " pod="kube-system/kube-proxy-th9vn"
Jul 7 00:15:55.246545 kubelet[3220]: I0707 00:15:55.246378 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fa5c683-4144-43e0-bd5c-08069ec0b636-lib-modules\") pod \"kube-proxy-th9vn\" (UID: \"7fa5c683-4144-43e0-bd5c-08069ec0b636\") " pod="kube-system/kube-proxy-th9vn"
Jul 7 00:15:55.349188 kubelet[3220]: I0707 00:15:55.347322 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-lib-modules\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.353561 kubelet[3220]: I0707 00:15:55.349681 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-xtables-lock\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.353561 kubelet[3220]: I0707 00:15:55.349723 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-host-proc-sys-net\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.353561 kubelet[3220]: I0707 00:15:55.349747 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsjsz\" (UniqueName: \"kubernetes.io/projected/230ef0fe-2300-42a0-afe2-ab9c0a1d8586-kube-api-access-vsjsz\") pod \"cilium-operator-6c4d7847fc-q6mdf\" (UID: \"230ef0fe-2300-42a0-afe2-ab9c0a1d8586\") " pod="kube-system/cilium-operator-6c4d7847fc-q6mdf"
Jul 7 00:15:55.353561 kubelet[3220]: I0707 00:15:55.349801 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-cilium-run\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.353561 kubelet[3220]: I0707 00:15:55.349831 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52118ea3-37d9-4b63-82be-dfd4b040e547-cilium-config-path\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.353875 kubelet[3220]: I0707 00:15:55.349853 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-host-proc-sys-kernel\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.353875 kubelet[3220]: I0707 00:15:55.349890 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-cni-path\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.353875 kubelet[3220]: I0707 00:15:55.349912 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-etc-cni-netd\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.353875 kubelet[3220]: I0707 00:15:55.349935 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9qfn\" (UniqueName: \"kubernetes.io/projected/52118ea3-37d9-4b63-82be-dfd4b040e547-kube-api-access-l9qfn\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.353875 kubelet[3220]: I0707 00:15:55.349974 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-cilium-cgroup\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.353875 kubelet[3220]: I0707 00:15:55.349995 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52118ea3-37d9-4b63-82be-dfd4b040e547-hubble-tls\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.354122 kubelet[3220]: I0707 00:15:55.350032 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-bpf-maps\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.354122 kubelet[3220]: I0707 00:15:55.350055 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52118ea3-37d9-4b63-82be-dfd4b040e547-clustermesh-secrets\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.354122 kubelet[3220]: I0707 00:15:55.350082 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/230ef0fe-2300-42a0-afe2-ab9c0a1d8586-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-q6mdf\" (UID: \"230ef0fe-2300-42a0-afe2-ab9c0a1d8586\") " pod="kube-system/cilium-operator-6c4d7847fc-q6mdf"
Jul 7 00:15:55.354122 kubelet[3220]: I0707 00:15:55.350104 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-hostproc\") pod \"cilium-8dpr6\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " pod="kube-system/cilium-8dpr6"
Jul 7 00:15:55.355080 systemd[1]: Created slice kubepods-burstable-pod52118ea3_37d9_4b63_82be_dfd4b040e547.slice - libcontainer container kubepods-burstable-pod52118ea3_37d9_4b63_82be_dfd4b040e547.slice.
Jul 7 00:15:55.370360 systemd[1]: Created slice kubepods-besteffort-pod230ef0fe_2300_42a0_afe2_ab9c0a1d8586.slice - libcontainer container kubepods-besteffort-pod230ef0fe_2300_42a0_afe2_ab9c0a1d8586.slice.
Jul 7 00:15:55.617717 containerd[1908]: time="2025-07-07T00:15:55.617220284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-th9vn,Uid:7fa5c683-4144-43e0-bd5c-08069ec0b636,Namespace:kube-system,Attempt:0,}"
Jul 7 00:15:55.668222 containerd[1908]: time="2025-07-07T00:15:55.668051985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8dpr6,Uid:52118ea3-37d9-4b63-82be-dfd4b040e547,Namespace:kube-system,Attempt:0,}"
Jul 7 00:15:55.677809 containerd[1908]: time="2025-07-07T00:15:55.677737342Z" level=info msg="connecting to shim 13df60ec71e6bc3ae2321522a6fc4327b95a055f0378f1a388443d9b588cb8c6" address="unix:///run/containerd/s/80c4667328c14ae9d28e68e79e3fd02ffc94f2adf3bd4c68f995499c09ca9655" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:15:55.680236 containerd[1908]: time="2025-07-07T00:15:55.680147663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-q6mdf,Uid:230ef0fe-2300-42a0-afe2-ab9c0a1d8586,Namespace:kube-system,Attempt:0,}"
Jul 7 00:15:55.730135 containerd[1908]: time="2025-07-07T00:15:55.730081545Z" level=info msg="connecting to shim 563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe" address="unix:///run/containerd/s/63e6f640722fe8ff363c681dcc1cfce9503d9f145adcb3c02b0e98897d4d224c" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:15:55.730485 systemd[1]: Started cri-containerd-13df60ec71e6bc3ae2321522a6fc4327b95a055f0378f1a388443d9b588cb8c6.scope - libcontainer container 13df60ec71e6bc3ae2321522a6fc4327b95a055f0378f1a388443d9b588cb8c6.
Jul 7 00:15:55.774132 containerd[1908]: time="2025-07-07T00:15:55.774077501Z" level=info msg="connecting to shim 05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7" address="unix:///run/containerd/s/7721465ed27870ee3b56ac3d50658d2eaf0bc735f7f360ce933c81c9cb8a35d9" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:15:55.776485 systemd[1]: Started cri-containerd-563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe.scope - libcontainer container 563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe.
Jul 7 00:15:55.813229 containerd[1908]: time="2025-07-07T00:15:55.812214162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-th9vn,Uid:7fa5c683-4144-43e0-bd5c-08069ec0b636,Namespace:kube-system,Attempt:0,} returns sandbox id \"13df60ec71e6bc3ae2321522a6fc4327b95a055f0378f1a388443d9b588cb8c6\""
Jul 7 00:15:55.819443 containerd[1908]: time="2025-07-07T00:15:55.819386596Z" level=info msg="CreateContainer within sandbox \"13df60ec71e6bc3ae2321522a6fc4327b95a055f0378f1a388443d9b588cb8c6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 00:15:55.848442 systemd[1]: Started cri-containerd-05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7.scope - libcontainer container 05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7.
Jul 7 00:15:55.852775 containerd[1908]: time="2025-07-07T00:15:55.852439491Z" level=info msg="Container e0162a4f1568ed52bb4cc414d05373d9a3448d8ff801fac6020d12671db924fe: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:15:55.860021 containerd[1908]: time="2025-07-07T00:15:55.859474516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8dpr6,Uid:52118ea3-37d9-4b63-82be-dfd4b040e547,Namespace:kube-system,Attempt:0,} returns sandbox id \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\""
Jul 7 00:15:55.865002 containerd[1908]: time="2025-07-07T00:15:55.864929112Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 7 00:15:55.885693 containerd[1908]: time="2025-07-07T00:15:55.884635842Z" level=info msg="CreateContainer within sandbox \"13df60ec71e6bc3ae2321522a6fc4327b95a055f0378f1a388443d9b588cb8c6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e0162a4f1568ed52bb4cc414d05373d9a3448d8ff801fac6020d12671db924fe\""
Jul 7 00:15:55.892745 containerd[1908]: time="2025-07-07T00:15:55.892697485Z" level=info msg="StartContainer for \"e0162a4f1568ed52bb4cc414d05373d9a3448d8ff801fac6020d12671db924fe\""
Jul 7 00:15:55.897551 containerd[1908]: time="2025-07-07T00:15:55.897434426Z" level=info msg="connecting to shim e0162a4f1568ed52bb4cc414d05373d9a3448d8ff801fac6020d12671db924fe" address="unix:///run/containerd/s/80c4667328c14ae9d28e68e79e3fd02ffc94f2adf3bd4c68f995499c09ca9655" protocol=ttrpc version=3
Jul 7 00:15:55.935516 systemd[1]: Started cri-containerd-e0162a4f1568ed52bb4cc414d05373d9a3448d8ff801fac6020d12671db924fe.scope - libcontainer container e0162a4f1568ed52bb4cc414d05373d9a3448d8ff801fac6020d12671db924fe.
Jul 7 00:15:55.990042 containerd[1908]: time="2025-07-07T00:15:55.989838245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-q6mdf,Uid:230ef0fe-2300-42a0-afe2-ab9c0a1d8586,Namespace:kube-system,Attempt:0,} returns sandbox id \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\""
Jul 7 00:15:56.012945 containerd[1908]: time="2025-07-07T00:15:56.012901804Z" level=info msg="StartContainer for \"e0162a4f1568ed52bb4cc414d05373d9a3448d8ff801fac6020d12671db924fe\" returns successfully"
Jul 7 00:15:56.649289 kubelet[3220]: I0707 00:15:56.648677 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-th9vn" podStartSLOduration=1.6486584789999998 podStartE2EDuration="1.648658479s" podCreationTimestamp="2025-07-07 00:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:56.648469846 +0000 UTC m=+6.335325579" watchObservedRunningTime="2025-07-07 00:15:56.648658479 +0000 UTC m=+6.335514213"
Jul 7 00:16:00.596679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316438439.mount: Deactivated successfully.
Jul 7 00:16:04.391773 containerd[1908]: time="2025-07-07T00:16:04.391699029Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:04.394127 containerd[1908]: time="2025-07-07T00:16:04.394080748Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 7 00:16:04.396218 containerd[1908]: time="2025-07-07T00:16:04.396042543Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:04.403372 containerd[1908]: time="2025-07-07T00:16:04.403272759Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.538281375s" Jul 7 00:16:04.403697 containerd[1908]: time="2025-07-07T00:16:04.403571688Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 7 00:16:04.405413 containerd[1908]: time="2025-07-07T00:16:04.405371976Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 00:16:04.408140 containerd[1908]: time="2025-07-07T00:16:04.408095855Z" level=info msg="CreateContainer within sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:16:04.506050 containerd[1908]: time="2025-07-07T00:16:04.505385832Z" level=info msg="Container 6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:16:04.528842 containerd[1908]: time="2025-07-07T00:16:04.528777602Z" level=info msg="CreateContainer within sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\"" Jul 7 00:16:04.529979 containerd[1908]: time="2025-07-07T00:16:04.529915265Z" level=info msg="StartContainer for \"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\"" Jul 7 00:16:04.532101 containerd[1908]: time="2025-07-07T00:16:04.532040725Z" level=info msg="connecting to shim 6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726" address="unix:///run/containerd/s/63e6f640722fe8ff363c681dcc1cfce9503d9f145adcb3c02b0e98897d4d224c" protocol=ttrpc version=3 Jul 7 00:16:04.611460 systemd[1]: Started cri-containerd-6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726.scope - libcontainer container 6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726. Jul 7 00:16:04.659353 containerd[1908]: time="2025-07-07T00:16:04.659074954Z" level=info msg="StartContainer for \"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\" returns successfully" Jul 7 00:16:04.675909 systemd[1]: cri-containerd-6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726.scope: Deactivated successfully. 
Jul 7 00:16:04.763267 containerd[1908]: time="2025-07-07T00:16:04.762866017Z" level=info msg="received exit event container_id:\"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\" id:\"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\" pid:3816 exited_at:{seconds:1751847364 nanos:678771659}" Jul 7 00:16:04.764547 containerd[1908]: time="2025-07-07T00:16:04.764316144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\" id:\"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\" pid:3816 exited_at:{seconds:1751847364 nanos:678771659}" Jul 7 00:16:04.819348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726-rootfs.mount: Deactivated successfully. Jul 7 00:16:05.688628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278218093.mount: Deactivated successfully. Jul 7 00:16:05.788322 containerd[1908]: time="2025-07-07T00:16:05.786659885Z" level=info msg="CreateContainer within sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:16:05.814136 containerd[1908]: time="2025-07-07T00:16:05.814088940Z" level=info msg="Container 13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:16:05.820601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1661646529.mount: Deactivated successfully. 
Jul 7 00:16:05.834075 containerd[1908]: time="2025-07-07T00:16:05.834022241Z" level=info msg="CreateContainer within sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\"" Jul 7 00:16:05.836112 containerd[1908]: time="2025-07-07T00:16:05.835918625Z" level=info msg="StartContainer for \"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\"" Jul 7 00:16:05.838681 containerd[1908]: time="2025-07-07T00:16:05.838568445Z" level=info msg="connecting to shim 13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9" address="unix:///run/containerd/s/63e6f640722fe8ff363c681dcc1cfce9503d9f145adcb3c02b0e98897d4d224c" protocol=ttrpc version=3 Jul 7 00:16:05.879774 systemd[1]: Started cri-containerd-13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9.scope - libcontainer container 13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9. Jul 7 00:16:05.962617 containerd[1908]: time="2025-07-07T00:16:05.961831166Z" level=info msg="StartContainer for \"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\" returns successfully" Jul 7 00:16:05.982059 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:16:05.982460 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:16:05.983319 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:16:05.988327 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:16:05.996114 systemd[1]: cri-containerd-13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9.scope: Deactivated successfully. 
Jul 7 00:16:05.998996 containerd[1908]: time="2025-07-07T00:16:05.998950778Z" level=info msg="received exit event container_id:\"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\" id:\"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\" pid:3874 exited_at:{seconds:1751847365 nanos:997947796}" Jul 7 00:16:06.000847 containerd[1908]: time="2025-07-07T00:16:06.000698043Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\" id:\"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\" pid:3874 exited_at:{seconds:1751847365 nanos:997947796}" Jul 7 00:16:06.028864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:16:06.485530 containerd[1908]: time="2025-07-07T00:16:06.485466584Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:06.487416 containerd[1908]: time="2025-07-07T00:16:06.487174163Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 00:16:06.489576 containerd[1908]: time="2025-07-07T00:16:06.489535637Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:06.490967 containerd[1908]: time="2025-07-07T00:16:06.490935161Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.085520633s" Jul 7 00:16:06.491095 containerd[1908]: time="2025-07-07T00:16:06.491079945Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 00:16:06.493675 containerd[1908]: time="2025-07-07T00:16:06.493641377Z" level=info msg="CreateContainer within sandbox \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 00:16:06.509394 containerd[1908]: time="2025-07-07T00:16:06.509348350Z" level=info msg="Container 112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:16:06.522467 containerd[1908]: time="2025-07-07T00:16:06.522404789Z" level=info msg="CreateContainer within sandbox \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\"" Jul 7 00:16:06.523071 containerd[1908]: time="2025-07-07T00:16:06.523031018Z" level=info msg="StartContainer for \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\"" Jul 7 00:16:06.524093 containerd[1908]: time="2025-07-07T00:16:06.523941361Z" level=info msg="connecting to shim 112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef" address="unix:///run/containerd/s/7721465ed27870ee3b56ac3d50658d2eaf0bc735f7f360ce933c81c9cb8a35d9" protocol=ttrpc version=3 Jul 7 00:16:06.545482 systemd[1]: Started cri-containerd-112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef.scope - libcontainer container 112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef. 
Jul 7 00:16:06.585924 containerd[1908]: time="2025-07-07T00:16:06.585887766Z" level=info msg="StartContainer for \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\" returns successfully" Jul 7 00:16:06.676560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9-rootfs.mount: Deactivated successfully. Jul 7 00:16:06.796275 containerd[1908]: time="2025-07-07T00:16:06.796004505Z" level=info msg="CreateContainer within sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:16:06.826237 containerd[1908]: time="2025-07-07T00:16:06.822631616Z" level=info msg="Container f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:16:06.846858 containerd[1908]: time="2025-07-07T00:16:06.846806767Z" level=info msg="CreateContainer within sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\"" Jul 7 00:16:06.848725 containerd[1908]: time="2025-07-07T00:16:06.848687294Z" level=info msg="StartContainer for \"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\"" Jul 7 00:16:06.851447 containerd[1908]: time="2025-07-07T00:16:06.851398861Z" level=info msg="connecting to shim f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3" address="unix:///run/containerd/s/63e6f640722fe8ff363c681dcc1cfce9503d9f145adcb3c02b0e98897d4d224c" protocol=ttrpc version=3 Jul 7 00:16:06.927400 systemd[1]: Started cri-containerd-f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3.scope - libcontainer container f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3. 
Jul 7 00:16:06.968344 kubelet[3220]: I0707 00:16:06.968279 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-q6mdf" podStartSLOduration=1.469125347 podStartE2EDuration="11.968252222s" podCreationTimestamp="2025-07-07 00:15:55 +0000 UTC" firstStartedPulling="2025-07-07 00:15:55.992828259 +0000 UTC m=+5.679683984" lastFinishedPulling="2025-07-07 00:16:06.491955147 +0000 UTC m=+16.178810859" observedRunningTime="2025-07-07 00:16:06.877105395 +0000 UTC m=+16.563961131" watchObservedRunningTime="2025-07-07 00:16:06.968252222 +0000 UTC m=+16.655107954" Jul 7 00:16:07.032475 containerd[1908]: time="2025-07-07T00:16:07.032326148Z" level=info msg="StartContainer for \"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\" returns successfully" Jul 7 00:16:07.041494 systemd[1]: cri-containerd-f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3.scope: Deactivated successfully. Jul 7 00:16:07.042116 systemd[1]: cri-containerd-f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3.scope: Consumed 38ms CPU time, 3.8M memory peak, 1.2M read from disk. 
Jul 7 00:16:07.044240 containerd[1908]: time="2025-07-07T00:16:07.043963708Z" level=info msg="received exit event container_id:\"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\" id:\"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\" pid:3961 exited_at:{seconds:1751847367 nanos:42941585}" Jul 7 00:16:07.046673 containerd[1908]: time="2025-07-07T00:16:07.046189870Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\" id:\"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\" pid:3961 exited_at:{seconds:1751847367 nanos:42941585}" Jul 7 00:16:07.106670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3-rootfs.mount: Deactivated successfully. Jul 7 00:16:07.804083 containerd[1908]: time="2025-07-07T00:16:07.804016462Z" level=info msg="CreateContainer within sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:16:07.832791 containerd[1908]: time="2025-07-07T00:16:07.832739603Z" level=info msg="Container 82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:16:07.873330 containerd[1908]: time="2025-07-07T00:16:07.873162892Z" level=info msg="CreateContainer within sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\"" Jul 7 00:16:07.875238 containerd[1908]: time="2025-07-07T00:16:07.875130557Z" level=info msg="StartContainer for \"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\"" Jul 7 00:16:07.876954 containerd[1908]: time="2025-07-07T00:16:07.876871414Z" level=info msg="connecting to shim 
82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d" address="unix:///run/containerd/s/63e6f640722fe8ff363c681dcc1cfce9503d9f145adcb3c02b0e98897d4d224c" protocol=ttrpc version=3 Jul 7 00:16:07.907679 systemd[1]: Started cri-containerd-82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d.scope - libcontainer container 82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d. Jul 7 00:16:07.944492 systemd[1]: cri-containerd-82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d.scope: Deactivated successfully. Jul 7 00:16:07.947021 containerd[1908]: time="2025-07-07T00:16:07.946530452Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\" id:\"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\" pid:4002 exited_at:{seconds:1751847367 nanos:945700809}" Jul 7 00:16:07.949177 containerd[1908]: time="2025-07-07T00:16:07.948013167Z" level=info msg="received exit event container_id:\"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\" id:\"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\" pid:4002 exited_at:{seconds:1751847367 nanos:945700809}" Jul 7 00:16:07.959041 containerd[1908]: time="2025-07-07T00:16:07.958995523Z" level=info msg="StartContainer for \"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\" returns successfully" Jul 7 00:16:07.977744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d-rootfs.mount: Deactivated successfully. 
Jul 7 00:16:08.811692 containerd[1908]: time="2025-07-07T00:16:08.811401314Z" level=info msg="CreateContainer within sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:16:08.853258 containerd[1908]: time="2025-07-07T00:16:08.851085081Z" level=info msg="Container 7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:16:08.866406 containerd[1908]: time="2025-07-07T00:16:08.866353871Z" level=info msg="CreateContainer within sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\"" Jul 7 00:16:08.866961 containerd[1908]: time="2025-07-07T00:16:08.866919062Z" level=info msg="StartContainer for \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\"" Jul 7 00:16:08.867838 containerd[1908]: time="2025-07-07T00:16:08.867810265Z" level=info msg="connecting to shim 7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4" address="unix:///run/containerd/s/63e6f640722fe8ff363c681dcc1cfce9503d9f145adcb3c02b0e98897d4d224c" protocol=ttrpc version=3 Jul 7 00:16:08.896699 systemd[1]: Started cri-containerd-7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4.scope - libcontainer container 7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4. 
Jul 7 00:16:08.955396 containerd[1908]: time="2025-07-07T00:16:08.955348838Z" level=info msg="StartContainer for \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" returns successfully" Jul 7 00:16:09.097448 containerd[1908]: time="2025-07-07T00:16:09.097240673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" id:\"673d5339edd02bb817e2832b102a0f1eb6df1b82e067df6ecc58af9e898f5e9e\" pid:4071 exited_at:{seconds:1751847369 nanos:95527126}" Jul 7 00:16:09.196892 kubelet[3220]: I0707 00:16:09.196841 3220 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:16:09.266189 systemd[1]: Created slice kubepods-burstable-pod95043214_8bcf_46eb_a8aa_5f4babbeeb72.slice - libcontainer container kubepods-burstable-pod95043214_8bcf_46eb_a8aa_5f4babbeeb72.slice. Jul 7 00:16:09.280081 systemd[1]: Created slice kubepods-burstable-podf190a489_92fc_4bad_a9b6_52324e33ca3a.slice - libcontainer container kubepods-burstable-podf190a489_92fc_4bad_a9b6_52324e33ca3a.slice. 
Jul 7 00:16:09.302336 kubelet[3220]: I0707 00:16:09.302285 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95043214-8bcf-46eb-a8aa-5f4babbeeb72-config-volume\") pod \"coredns-668d6bf9bc-drm2l\" (UID: \"95043214-8bcf-46eb-a8aa-5f4babbeeb72\") " pod="kube-system/coredns-668d6bf9bc-drm2l" Jul 7 00:16:09.302336 kubelet[3220]: I0707 00:16:09.302332 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wzfw\" (UniqueName: \"kubernetes.io/projected/95043214-8bcf-46eb-a8aa-5f4babbeeb72-kube-api-access-8wzfw\") pod \"coredns-668d6bf9bc-drm2l\" (UID: \"95043214-8bcf-46eb-a8aa-5f4babbeeb72\") " pod="kube-system/coredns-668d6bf9bc-drm2l" Jul 7 00:16:09.403172 kubelet[3220]: I0707 00:16:09.402681 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f190a489-92fc-4bad-a9b6-52324e33ca3a-config-volume\") pod \"coredns-668d6bf9bc-qx7gx\" (UID: \"f190a489-92fc-4bad-a9b6-52324e33ca3a\") " pod="kube-system/coredns-668d6bf9bc-qx7gx" Jul 7 00:16:09.403172 kubelet[3220]: I0707 00:16:09.402740 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq5pw\" (UniqueName: \"kubernetes.io/projected/f190a489-92fc-4bad-a9b6-52324e33ca3a-kube-api-access-qq5pw\") pod \"coredns-668d6bf9bc-qx7gx\" (UID: \"f190a489-92fc-4bad-a9b6-52324e33ca3a\") " pod="kube-system/coredns-668d6bf9bc-qx7gx" Jul 7 00:16:09.575914 containerd[1908]: time="2025-07-07T00:16:09.575863220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-drm2l,Uid:95043214-8bcf-46eb-a8aa-5f4babbeeb72,Namespace:kube-system,Attempt:0,}" Jul 7 00:16:09.596667 containerd[1908]: time="2025-07-07T00:16:09.596329146Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-qx7gx,Uid:f190a489-92fc-4bad-a9b6-52324e33ca3a,Namespace:kube-system,Attempt:0,}" Jul 7 00:16:11.622812 (udev-worker)[4133]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:16:11.626187 systemd-networkd[1813]: cilium_host: Link UP Jul 7 00:16:11.627287 systemd-networkd[1813]: cilium_net: Link UP Jul 7 00:16:11.627817 (udev-worker)[4132]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:16:11.629563 systemd-networkd[1813]: cilium_net: Gained carrier Jul 7 00:16:11.629742 systemd-networkd[1813]: cilium_host: Gained carrier Jul 7 00:16:11.630574 systemd-networkd[1813]: cilium_net: Gained IPv6LL Jul 7 00:16:11.755153 (udev-worker)[4186]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:16:11.777381 systemd-networkd[1813]: cilium_vxlan: Link UP Jul 7 00:16:11.777394 systemd-networkd[1813]: cilium_vxlan: Gained carrier Jul 7 00:16:12.067417 systemd-networkd[1813]: cilium_host: Gained IPv6LL Jul 7 00:16:12.810310 kernel: NET: Registered PF_ALG protocol family Jul 7 00:16:12.828774 systemd-networkd[1813]: cilium_vxlan: Gained IPv6LL Jul 7 00:16:13.593769 systemd-networkd[1813]: lxc_health: Link UP Jul 7 00:16:13.605328 systemd-networkd[1813]: lxc_health: Gained carrier Jul 7 00:16:13.712658 kubelet[3220]: I0707 00:16:13.712191 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8dpr6" podStartSLOduration=10.170531828 podStartE2EDuration="18.712169s" podCreationTimestamp="2025-07-07 00:15:55 +0000 UTC" firstStartedPulling="2025-07-07 00:15:55.863418876 +0000 UTC m=+5.550274591" lastFinishedPulling="2025-07-07 00:16:04.405056035 +0000 UTC m=+14.091911763" observedRunningTime="2025-07-07 00:16:09.860857976 +0000 UTC m=+19.547713710" watchObservedRunningTime="2025-07-07 00:16:13.712169 +0000 UTC m=+23.399024735" Jul 7 00:16:14.173876 systemd-networkd[1813]: lxc53fd586f2593: Link UP Jul 7 00:16:14.175415 kernel: eth0: 
renamed from tmp33fb5 Jul 7 00:16:14.180268 systemd-networkd[1813]: lxc53fd586f2593: Gained carrier Jul 7 00:16:14.190742 systemd-networkd[1813]: lxcd9e7da4a0fa5: Link UP Jul 7 00:16:14.196748 (udev-worker)[4188]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:16:14.200185 kernel: eth0: renamed from tmpaf655 Jul 7 00:16:14.204021 systemd-networkd[1813]: lxcd9e7da4a0fa5: Gained carrier Jul 7 00:16:15.259496 systemd-networkd[1813]: lxc_health: Gained IPv6LL Jul 7 00:16:15.771641 systemd-networkd[1813]: lxc53fd586f2593: Gained IPv6LL Jul 7 00:16:15.899554 systemd-networkd[1813]: lxcd9e7da4a0fa5: Gained IPv6LL Jul 7 00:16:18.089177 ntpd[1851]: Listen normally on 7 cilium_host 192.168.0.5:123 Jul 7 00:16:18.089316 ntpd[1851]: Listen normally on 8 cilium_net [fe80::c816:c3ff:fec8:95fe%4]:123 Jul 7 00:16:18.089376 ntpd[1851]: Listen normally on 9 cilium_host [fe80::b844:a4ff:fea4:8173%5]:123 Jul 7 00:16:18.089419 ntpd[1851]: Listen normally on 10 cilium_vxlan [fe80::6048:9fff:fed3:9da6%6]:123 Jul 7 00:16:18.089459 ntpd[1851]: Listen normally on 11 lxc_health 
[fe80::f4a2:37ff:fe9c:79a5%8]:123 Jul 7 00:16:18.089499 ntpd[1851]: Listen normally on 12 lxc53fd586f2593 [fe80::3c33:98ff:fe63:7e13%10]:123 Jul 7 00:16:18.089537 ntpd[1851]: Listen normally on 13 lxcd9e7da4a0fa5 [fe80::747c:9fff:fed7:da75%12]:123 Jul 7 00:16:18.684781 containerd[1908]: time="2025-07-07T00:16:18.684651517Z" level=info msg="connecting to shim af655679e8745ffafb602b82f1ba259991e5f5db5e9642614381370bf8f4eb0c" address="unix:///run/containerd/s/29700eea5621ab766012740080e32121797297e5f02e7b4d5823f8126e57c0a9" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:16:18.751954 containerd[1908]: time="2025-07-07T00:16:18.751686098Z" level=info msg="connecting to shim 33fb5a49d4a8d2cd85953a77fd796ff0278b821dc082fe589da0cb276553e9f0" address="unix:///run/containerd/s/2caefb68d80b24ad0de4419ceb1713716f556bdc3bf2a2463105c27540ffe627" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:16:18.760703 systemd[1]: Started cri-containerd-af655679e8745ffafb602b82f1ba259991e5f5db5e9642614381370bf8f4eb0c.scope - libcontainer container af655679e8745ffafb602b82f1ba259991e5f5db5e9642614381370bf8f4eb0c. Jul 7 00:16:18.840466 systemd[1]: Started cri-containerd-33fb5a49d4a8d2cd85953a77fd796ff0278b821dc082fe589da0cb276553e9f0.scope - libcontainer container 33fb5a49d4a8d2cd85953a77fd796ff0278b821dc082fe589da0cb276553e9f0. 
Jul 7 00:16:18.930760 containerd[1908]: time="2025-07-07T00:16:18.930671722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qx7gx,Uid:f190a489-92fc-4bad-a9b6-52324e33ca3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"af655679e8745ffafb602b82f1ba259991e5f5db5e9642614381370bf8f4eb0c\"" Jul 7 00:16:18.947062 containerd[1908]: time="2025-07-07T00:16:18.946609743Z" level=info msg="CreateContainer within sandbox \"af655679e8745ffafb602b82f1ba259991e5f5db5e9642614381370bf8f4eb0c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:16:18.971187 containerd[1908]: time="2025-07-07T00:16:18.971129889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-drm2l,Uid:95043214-8bcf-46eb-a8aa-5f4babbeeb72,Namespace:kube-system,Attempt:0,} returns sandbox id \"33fb5a49d4a8d2cd85953a77fd796ff0278b821dc082fe589da0cb276553e9f0\"" Jul 7 00:16:18.979164 containerd[1908]: time="2025-07-07T00:16:18.979100643Z" level=info msg="CreateContainer within sandbox \"33fb5a49d4a8d2cd85953a77fd796ff0278b821dc082fe589da0cb276553e9f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:16:18.984660 containerd[1908]: time="2025-07-07T00:16:18.984596889Z" level=info msg="Container 41fcfbd02217c68f73b9bbc9ca219c3417b1fbdbd25c43cbb43b76615323763a: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:16:19.000122 containerd[1908]: time="2025-07-07T00:16:19.000057387Z" level=info msg="CreateContainer within sandbox \"af655679e8745ffafb602b82f1ba259991e5f5db5e9642614381370bf8f4eb0c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41fcfbd02217c68f73b9bbc9ca219c3417b1fbdbd25c43cbb43b76615323763a\"" Jul 7 00:16:19.001577 containerd[1908]: time="2025-07-07T00:16:19.001530089Z" level=info msg="StartContainer for \"41fcfbd02217c68f73b9bbc9ca219c3417b1fbdbd25c43cbb43b76615323763a\"" Jul 7 00:16:19.004646 containerd[1908]: time="2025-07-07T00:16:19.004508130Z" level=info msg="Container 
4b34bf30f9539dd3fdeecba2717ed98e8e6a258e9a1adbd7a454c313bdf07270: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:16:19.004836 containerd[1908]: time="2025-07-07T00:16:19.004594036Z" level=info msg="connecting to shim 41fcfbd02217c68f73b9bbc9ca219c3417b1fbdbd25c43cbb43b76615323763a" address="unix:///run/containerd/s/29700eea5621ab766012740080e32121797297e5f02e7b4d5823f8126e57c0a9" protocol=ttrpc version=3 Jul 7 00:16:19.018293 containerd[1908]: time="2025-07-07T00:16:19.018165264Z" level=info msg="CreateContainer within sandbox \"33fb5a49d4a8d2cd85953a77fd796ff0278b821dc082fe589da0cb276553e9f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b34bf30f9539dd3fdeecba2717ed98e8e6a258e9a1adbd7a454c313bdf07270\"" Jul 7 00:16:19.021778 containerd[1908]: time="2025-07-07T00:16:19.021582867Z" level=info msg="StartContainer for \"4b34bf30f9539dd3fdeecba2717ed98e8e6a258e9a1adbd7a454c313bdf07270\"" Jul 7 00:16:19.023138 containerd[1908]: time="2025-07-07T00:16:19.023100986Z" level=info msg="connecting to shim 4b34bf30f9539dd3fdeecba2717ed98e8e6a258e9a1adbd7a454c313bdf07270" address="unix:///run/containerd/s/2caefb68d80b24ad0de4419ceb1713716f556bdc3bf2a2463105c27540ffe627" protocol=ttrpc version=3 Jul 7 00:16:19.034729 systemd[1]: Started cri-containerd-41fcfbd02217c68f73b9bbc9ca219c3417b1fbdbd25c43cbb43b76615323763a.scope - libcontainer container 41fcfbd02217c68f73b9bbc9ca219c3417b1fbdbd25c43cbb43b76615323763a. Jul 7 00:16:19.051599 systemd[1]: Started cri-containerd-4b34bf30f9539dd3fdeecba2717ed98e8e6a258e9a1adbd7a454c313bdf07270.scope - libcontainer container 4b34bf30f9539dd3fdeecba2717ed98e8e6a258e9a1adbd7a454c313bdf07270. 
Jul 7 00:16:19.097514 containerd[1908]: time="2025-07-07T00:16:19.097451794Z" level=info msg="StartContainer for \"4b34bf30f9539dd3fdeecba2717ed98e8e6a258e9a1adbd7a454c313bdf07270\" returns successfully" Jul 7 00:16:19.098403 containerd[1908]: time="2025-07-07T00:16:19.098362869Z" level=info msg="StartContainer for \"41fcfbd02217c68f73b9bbc9ca219c3417b1fbdbd25c43cbb43b76615323763a\" returns successfully" Jul 7 00:16:19.668508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941750928.mount: Deactivated successfully. Jul 7 00:16:19.927976 kubelet[3220]: I0707 00:16:19.927663 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qx7gx" podStartSLOduration=24.927647834 podStartE2EDuration="24.927647834s" podCreationTimestamp="2025-07-07 00:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:16:19.926451301 +0000 UTC m=+29.613307035" watchObservedRunningTime="2025-07-07 00:16:19.927647834 +0000 UTC m=+29.614503568" Jul 7 00:16:24.214620 kubelet[3220]: I0707 00:16:24.214581 3220 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:16:24.239827 kubelet[3220]: I0707 00:16:24.239692 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-drm2l" podStartSLOduration=29.239667067 podStartE2EDuration="29.239667067s" podCreationTimestamp="2025-07-07 00:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:16:19.959482721 +0000 UTC m=+29.646338512" watchObservedRunningTime="2025-07-07 00:16:24.239667067 +0000 UTC m=+33.926522796" Jul 7 00:16:37.957396 systemd[1]: Started sshd@9-172.31.30.121:22-147.75.109.163:41150.service - OpenSSH per-connection server daemon (147.75.109.163:41150). 
Jul 7 00:16:38.222610 sshd[4714]: Accepted publickey for core from 147.75.109.163 port 41150 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:16:38.225385 sshd-session[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:38.251373 systemd-logind[1861]: New session 10 of user core. Jul 7 00:16:38.258816 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:16:39.195766 sshd[4716]: Connection closed by 147.75.109.163 port 41150 Jul 7 00:16:39.197456 sshd-session[4714]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:39.201684 systemd[1]: sshd@9-172.31.30.121:22-147.75.109.163:41150.service: Deactivated successfully. Jul 7 00:16:39.206054 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:16:39.208237 systemd-logind[1861]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:16:39.210438 systemd-logind[1861]: Removed session 10. Jul 7 00:16:44.231976 systemd[1]: Started sshd@10-172.31.30.121:22-147.75.109.163:41160.service - OpenSSH per-connection server daemon (147.75.109.163:41160). Jul 7 00:16:44.410597 sshd[4728]: Accepted publickey for core from 147.75.109.163 port 41160 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:16:44.413580 sshd-session[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:44.418778 systemd-logind[1861]: New session 11 of user core. Jul 7 00:16:44.429448 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:16:44.646233 sshd[4730]: Connection closed by 147.75.109.163 port 41160 Jul 7 00:16:44.646848 sshd-session[4728]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:44.651503 systemd[1]: sshd@10-172.31.30.121:22-147.75.109.163:41160.service: Deactivated successfully. Jul 7 00:16:44.654913 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:16:44.656437 systemd-logind[1861]: Session 11 logged out. 
Waiting for processes to exit. Jul 7 00:16:44.658420 systemd-logind[1861]: Removed session 11. Jul 7 00:16:49.681896 systemd[1]: Started sshd@11-172.31.30.121:22-147.75.109.163:39546.service - OpenSSH per-connection server daemon (147.75.109.163:39546). Jul 7 00:16:49.859101 sshd[4745]: Accepted publickey for core from 147.75.109.163 port 39546 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:16:49.860695 sshd-session[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:49.866291 systemd-logind[1861]: New session 12 of user core. Jul 7 00:16:49.872433 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:16:50.068256 sshd[4747]: Connection closed by 147.75.109.163 port 39546 Jul 7 00:16:50.069453 sshd-session[4745]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:50.074138 systemd[1]: sshd@11-172.31.30.121:22-147.75.109.163:39546.service: Deactivated successfully. Jul 7 00:16:50.076750 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:16:50.078027 systemd-logind[1861]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:16:50.079719 systemd-logind[1861]: Removed session 12. Jul 7 00:16:55.107480 systemd[1]: Started sshd@12-172.31.30.121:22-147.75.109.163:39550.service - OpenSSH per-connection server daemon (147.75.109.163:39550). Jul 7 00:16:55.299253 sshd[4762]: Accepted publickey for core from 147.75.109.163 port 39550 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:16:55.300561 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:55.307013 systemd-logind[1861]: New session 13 of user core. Jul 7 00:16:55.314491 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 7 00:16:55.504317 sshd[4764]: Connection closed by 147.75.109.163 port 39550 Jul 7 00:16:55.505241 sshd-session[4762]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:55.509220 systemd-logind[1861]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:16:55.511591 systemd[1]: sshd@12-172.31.30.121:22-147.75.109.163:39550.service: Deactivated successfully. Jul 7 00:16:55.513645 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:16:55.516368 systemd-logind[1861]: Removed session 13. Jul 7 00:16:55.539311 systemd[1]: Started sshd@13-172.31.30.121:22-147.75.109.163:39566.service - OpenSSH per-connection server daemon (147.75.109.163:39566). Jul 7 00:16:55.708267 sshd[4776]: Accepted publickey for core from 147.75.109.163 port 39566 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:16:55.709469 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:55.715279 systemd-logind[1861]: New session 14 of user core. Jul 7 00:16:55.724476 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:16:55.965670 sshd[4778]: Connection closed by 147.75.109.163 port 39566 Jul 7 00:16:55.966253 sshd-session[4776]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:55.972880 systemd[1]: sshd@13-172.31.30.121:22-147.75.109.163:39566.service: Deactivated successfully. Jul 7 00:16:55.978741 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:16:55.982511 systemd-logind[1861]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:16:55.987850 systemd-logind[1861]: Removed session 14. Jul 7 00:16:56.003691 systemd[1]: Started sshd@14-172.31.30.121:22-147.75.109.163:42600.service - OpenSSH per-connection server daemon (147.75.109.163:42600). 
Jul 7 00:16:56.193273 sshd[4788]: Accepted publickey for core from 147.75.109.163 port 42600 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:16:56.194766 sshd-session[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:56.201833 systemd-logind[1861]: New session 15 of user core. Jul 7 00:16:56.208452 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:16:56.399348 sshd[4790]: Connection closed by 147.75.109.163 port 42600 Jul 7 00:16:56.400601 sshd-session[4788]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:56.407479 systemd-logind[1861]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:16:56.407691 systemd[1]: sshd@14-172.31.30.121:22-147.75.109.163:42600.service: Deactivated successfully. Jul 7 00:16:56.410566 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:16:56.415042 systemd-logind[1861]: Removed session 15. Jul 7 00:17:01.448545 systemd[1]: Started sshd@15-172.31.30.121:22-147.75.109.163:42604.service - OpenSSH per-connection server daemon (147.75.109.163:42604). Jul 7 00:17:01.735540 sshd[4805]: Accepted publickey for core from 147.75.109.163 port 42604 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:01.739862 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:01.751327 systemd-logind[1861]: New session 16 of user core. Jul 7 00:17:01.766234 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:17:02.241052 sshd[4807]: Connection closed by 147.75.109.163 port 42604 Jul 7 00:17:02.242660 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:02.264345 systemd[1]: sshd@15-172.31.30.121:22-147.75.109.163:42604.service: Deactivated successfully. Jul 7 00:17:02.277742 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:17:02.283595 systemd-logind[1861]: Session 16 logged out. 
Waiting for processes to exit. Jul 7 00:17:02.286932 systemd-logind[1861]: Removed session 16. Jul 7 00:17:07.282422 systemd[1]: Started sshd@16-172.31.30.121:22-147.75.109.163:54946.service - OpenSSH per-connection server daemon (147.75.109.163:54946). Jul 7 00:17:07.459950 sshd[4819]: Accepted publickey for core from 147.75.109.163 port 54946 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:07.461866 sshd-session[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:07.470805 systemd-logind[1861]: New session 17 of user core. Jul 7 00:17:07.475429 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:17:07.671122 sshd[4821]: Connection closed by 147.75.109.163 port 54946 Jul 7 00:17:07.671779 sshd-session[4819]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:07.677777 systemd[1]: sshd@16-172.31.30.121:22-147.75.109.163:54946.service: Deactivated successfully. Jul 7 00:17:07.681575 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:17:07.684366 systemd-logind[1861]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:17:07.687505 systemd-logind[1861]: Removed session 17. Jul 7 00:17:12.705009 systemd[1]: Started sshd@17-172.31.30.121:22-147.75.109.163:54950.service - OpenSSH per-connection server daemon (147.75.109.163:54950). Jul 7 00:17:12.874271 sshd[4833]: Accepted publickey for core from 147.75.109.163 port 54950 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:12.875706 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:12.882875 systemd-logind[1861]: New session 18 of user core. Jul 7 00:17:12.889503 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 7 00:17:13.082469 sshd[4835]: Connection closed by 147.75.109.163 port 54950 Jul 7 00:17:13.083496 sshd-session[4833]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:13.089506 systemd[1]: sshd@17-172.31.30.121:22-147.75.109.163:54950.service: Deactivated successfully. Jul 7 00:17:13.092761 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:17:13.094288 systemd-logind[1861]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:17:13.096790 systemd-logind[1861]: Removed session 18. Jul 7 00:17:13.119553 systemd[1]: Started sshd@18-172.31.30.121:22-147.75.109.163:54956.service - OpenSSH per-connection server daemon (147.75.109.163:54956). Jul 7 00:17:13.295181 sshd[4847]: Accepted publickey for core from 147.75.109.163 port 54956 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:13.296898 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:13.303497 systemd-logind[1861]: New session 19 of user core. Jul 7 00:17:13.313524 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:17:13.994352 sshd[4849]: Connection closed by 147.75.109.163 port 54956 Jul 7 00:17:13.995615 sshd-session[4847]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:14.014891 systemd-logind[1861]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:17:14.015187 systemd[1]: sshd@18-172.31.30.121:22-147.75.109.163:54956.service: Deactivated successfully. Jul 7 00:17:14.026283 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:17:14.038443 systemd[1]: Started sshd@19-172.31.30.121:22-147.75.109.163:54964.service - OpenSSH per-connection server daemon (147.75.109.163:54964). Jul 7 00:17:14.039297 systemd-logind[1861]: Removed session 19. 
Jul 7 00:17:14.232919 sshd[4859]: Accepted publickey for core from 147.75.109.163 port 54964 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:14.234463 sshd-session[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:14.240771 systemd-logind[1861]: New session 20 of user core. Jul 7 00:17:14.246639 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:17:15.220373 sshd[4861]: Connection closed by 147.75.109.163 port 54964 Jul 7 00:17:15.221118 sshd-session[4859]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:15.230626 systemd[1]: sshd@19-172.31.30.121:22-147.75.109.163:54964.service: Deactivated successfully. Jul 7 00:17:15.231488 systemd-logind[1861]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:17:15.238170 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:17:15.244231 systemd-logind[1861]: Removed session 20. Jul 7 00:17:15.260510 systemd[1]: Started sshd@20-172.31.30.121:22-147.75.109.163:54978.service - OpenSSH per-connection server daemon (147.75.109.163:54978). Jul 7 00:17:15.443115 sshd[4879]: Accepted publickey for core from 147.75.109.163 port 54978 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:15.445256 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:15.451072 systemd-logind[1861]: New session 21 of user core. Jul 7 00:17:15.459436 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:17:15.816123 sshd[4881]: Connection closed by 147.75.109.163 port 54978 Jul 7 00:17:15.816598 sshd-session[4879]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:15.823387 systemd[1]: sshd@20-172.31.30.121:22-147.75.109.163:54978.service: Deactivated successfully. Jul 7 00:17:15.827553 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:17:15.829692 systemd-logind[1861]: Session 21 logged out. 
Waiting for processes to exit. Jul 7 00:17:15.832863 systemd-logind[1861]: Removed session 21. Jul 7 00:17:15.848767 systemd[1]: Started sshd@21-172.31.30.121:22-147.75.109.163:54982.service - OpenSSH per-connection server daemon (147.75.109.163:54982). Jul 7 00:17:16.022172 sshd[4891]: Accepted publickey for core from 147.75.109.163 port 54982 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:16.024085 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:16.030435 systemd-logind[1861]: New session 22 of user core. Jul 7 00:17:16.040492 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:17:16.232824 sshd[4893]: Connection closed by 147.75.109.163 port 54982 Jul 7 00:17:16.233462 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:16.239570 systemd[1]: sshd@21-172.31.30.121:22-147.75.109.163:54982.service: Deactivated successfully. Jul 7 00:17:16.242534 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:17:16.245310 systemd-logind[1861]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:17:16.249315 systemd-logind[1861]: Removed session 22. Jul 7 00:17:17.813067 update_engine[1864]: I20250707 00:17:17.812992 1864 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 00:17:17.813067 update_engine[1864]: I20250707 00:17:17.813059 1864 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 00:17:17.817133 update_engine[1864]: I20250707 00:17:17.817029 1864 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 00:17:17.817531 update_engine[1864]: I20250707 00:17:17.817507 1864 omaha_request_params.cc:62] Current group set to beta Jul 7 00:17:17.817696 update_engine[1864]: I20250707 00:17:17.817647 1864 update_attempter.cc:499] Already updated boot flags. Skipping. 
Jul 7 00:17:17.817696 update_engine[1864]: I20250707 00:17:17.817660 1864 update_attempter.cc:643] Scheduling an action processor start. Jul 7 00:17:17.817696 update_engine[1864]: I20250707 00:17:17.817682 1864 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 00:17:17.818045 update_engine[1864]: I20250707 00:17:17.817734 1864 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 00:17:17.818045 update_engine[1864]: I20250707 00:17:17.817788 1864 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 00:17:17.818045 update_engine[1864]: I20250707 00:17:17.817795 1864 omaha_request_action.cc:272] Request: Jul 7 00:17:17.818045 update_engine[1864]: Jul 7 00:17:17.818045 update_engine[1864]: Jul 7 00:17:17.818045 update_engine[1864]: Jul 7 00:17:17.818045 update_engine[1864]: Jul 7 00:17:17.818045 update_engine[1864]: Jul 7 00:17:17.818045 update_engine[1864]: Jul 7 00:17:17.818045 update_engine[1864]: Jul 7 00:17:17.818045 update_engine[1864]: Jul 7 00:17:17.818045 update_engine[1864]: I20250707 00:17:17.817801 1864 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:17:17.833229 update_engine[1864]: I20250707 00:17:17.833150 1864 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:17:17.833640 update_engine[1864]: I20250707 00:17:17.833587 1864 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 00:17:17.840639 locksmithd[1928]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 00:17:17.866723 update_engine[1864]: E20250707 00:17:17.866621 1864 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:17:17.866845 update_engine[1864]: I20250707 00:17:17.866755 1864 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 7 00:17:21.266986 systemd[1]: Started sshd@22-172.31.30.121:22-147.75.109.163:47232.service - OpenSSH per-connection server daemon (147.75.109.163:47232). Jul 7 00:17:21.441608 sshd[4908]: Accepted publickey for core from 147.75.109.163 port 47232 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:21.443108 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:21.449239 systemd-logind[1861]: New session 23 of user core. Jul 7 00:17:21.453475 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 00:17:21.652752 sshd[4910]: Connection closed by 147.75.109.163 port 47232 Jul 7 00:17:21.653397 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:21.658072 systemd[1]: sshd@22-172.31.30.121:22-147.75.109.163:47232.service: Deactivated successfully. Jul 7 00:17:21.660302 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:17:21.662450 systemd-logind[1861]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:17:21.664472 systemd-logind[1861]: Removed session 23. Jul 7 00:17:26.694233 systemd[1]: Started sshd@23-172.31.30.121:22-147.75.109.163:51044.service - OpenSSH per-connection server daemon (147.75.109.163:51044). 
Jul 7 00:17:26.898583 sshd[4921]: Accepted publickey for core from 147.75.109.163 port 51044 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:26.900250 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:26.906940 systemd-logind[1861]: New session 24 of user core. Jul 7 00:17:26.917469 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:17:27.143161 sshd[4925]: Connection closed by 147.75.109.163 port 51044 Jul 7 00:17:27.144830 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:27.149581 systemd-logind[1861]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:17:27.149881 systemd[1]: sshd@23-172.31.30.121:22-147.75.109.163:51044.service: Deactivated successfully. Jul 7 00:17:27.152590 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:17:27.154937 systemd-logind[1861]: Removed session 24. Jul 7 00:17:27.764618 update_engine[1864]: I20250707 00:17:27.764533 1864 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:17:27.765048 update_engine[1864]: I20250707 00:17:27.764796 1864 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:17:27.765108 update_engine[1864]: I20250707 00:17:27.765077 1864 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:17:27.778060 update_engine[1864]: E20250707 00:17:27.777746 1864 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:17:27.778060 update_engine[1864]: I20250707 00:17:27.777848 1864 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 7 00:17:32.189294 systemd[1]: Started sshd@24-172.31.30.121:22-147.75.109.163:51056.service - OpenSSH per-connection server daemon (147.75.109.163:51056). 
Jul 7 00:17:32.362525 sshd[4937]: Accepted publickey for core from 147.75.109.163 port 51056 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:32.364335 sshd-session[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:32.370268 systemd-logind[1861]: New session 25 of user core. Jul 7 00:17:32.380484 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 00:17:32.567923 sshd[4939]: Connection closed by 147.75.109.163 port 51056 Jul 7 00:17:32.569721 sshd-session[4937]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:32.574970 systemd[1]: sshd@24-172.31.30.121:22-147.75.109.163:51056.service: Deactivated successfully. Jul 7 00:17:32.578007 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 00:17:32.579861 systemd-logind[1861]: Session 25 logged out. Waiting for processes to exit. Jul 7 00:17:32.581326 systemd-logind[1861]: Removed session 25. Jul 7 00:17:32.602078 systemd[1]: Started sshd@25-172.31.30.121:22-147.75.109.163:51064.service - OpenSSH per-connection server daemon (147.75.109.163:51064). Jul 7 00:17:32.778802 sshd[4951]: Accepted publickey for core from 147.75.109.163 port 51064 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:32.780689 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:32.787126 systemd-logind[1861]: New session 26 of user core. Jul 7 00:17:32.796457 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 7 00:17:34.278317 containerd[1908]: time="2025-07-07T00:17:34.278239150Z" level=info msg="StopContainer for \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\" with timeout 30 (s)" Jul 7 00:17:34.280805 containerd[1908]: time="2025-07-07T00:17:34.280764623Z" level=info msg="Stop container \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\" with signal terminated" Jul 7 00:17:34.312427 systemd[1]: cri-containerd-112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef.scope: Deactivated successfully. Jul 7 00:17:34.316511 containerd[1908]: time="2025-07-07T00:17:34.316422094Z" level=info msg="received exit event container_id:\"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\" id:\"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\" pid:3926 exited_at:{seconds:1751847454 nanos:315910007}" Jul 7 00:17:34.316511 containerd[1908]: time="2025-07-07T00:17:34.316479808Z" level=info msg="TaskExit event in podsandbox handler container_id:\"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\" id:\"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\" pid:3926 exited_at:{seconds:1751847454 nanos:315910007}" Jul 7 00:17:34.362018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef-rootfs.mount: Deactivated successfully. 
Jul 7 00:17:34.371143 containerd[1908]: time="2025-07-07T00:17:34.371100516Z" level=info msg="StopContainer for \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\" returns successfully" Jul 7 00:17:34.372104 containerd[1908]: time="2025-07-07T00:17:34.372064089Z" level=info msg="StopPodSandbox for \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\"" Jul 7 00:17:34.384957 containerd[1908]: time="2025-07-07T00:17:34.384909492Z" level=info msg="Container to stop \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:17:34.394808 containerd[1908]: time="2025-07-07T00:17:34.394512264Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:17:34.401422 systemd[1]: cri-containerd-05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7.scope: Deactivated successfully. 
Jul 7 00:17:34.406910 containerd[1908]: time="2025-07-07T00:17:34.406782937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" id:\"fa723fd99552cb51b479f6dbda075871652851c8cffbc9edf7623ad90612d525\" pid:4988 exited_at:{seconds:1751847454 nanos:406047758}" Jul 7 00:17:34.408882 containerd[1908]: time="2025-07-07T00:17:34.408768030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\" id:\"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\" pid:3600 exit_status:137 exited_at:{seconds:1751847454 nanos:408476186}" Jul 7 00:17:34.413747 containerd[1908]: time="2025-07-07T00:17:34.413677623Z" level=info msg="StopContainer for \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" with timeout 2 (s)" Jul 7 00:17:34.415172 containerd[1908]: time="2025-07-07T00:17:34.414983334Z" level=info msg="Stop container \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" with signal terminated" Jul 7 00:17:34.427832 systemd-networkd[1813]: lxc_health: Link DOWN Jul 7 00:17:34.428326 systemd-networkd[1813]: lxc_health: Lost carrier Jul 7 00:17:34.458146 systemd[1]: cri-containerd-7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4.scope: Deactivated successfully. Jul 7 00:17:34.458549 systemd[1]: cri-containerd-7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4.scope: Consumed 8.413s CPU time, 236.9M memory peak, 120M read from disk, 13.3M written to disk. 
Jul 7 00:17:34.465977 containerd[1908]: time="2025-07-07T00:17:34.465411594Z" level=info msg="received exit event container_id:\"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" id:\"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" pid:4041 exited_at:{seconds:1751847454 nanos:464796759}" Jul 7 00:17:34.474446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7-rootfs.mount: Deactivated successfully. Jul 7 00:17:34.498581 containerd[1908]: time="2025-07-07T00:17:34.498535238Z" level=info msg="shim disconnected" id=05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7 namespace=k8s.io Jul 7 00:17:34.498581 containerd[1908]: time="2025-07-07T00:17:34.498580538Z" level=warning msg="cleaning up after shim disconnected" id=05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7 namespace=k8s.io Jul 7 00:17:34.499893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4-rootfs.mount: Deactivated successfully. Jul 7 00:17:34.523976 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7-shm.mount: Deactivated successfully. 
Jul 7 00:17:34.544863 containerd[1908]: time="2025-07-07T00:17:34.498590599Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:17:34.546923 containerd[1908]: time="2025-07-07T00:17:34.546439060Z" level=info msg="TearDown network for sandbox \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\" successfully" Jul 7 00:17:34.546923 containerd[1908]: time="2025-07-07T00:17:34.546477357Z" level=info msg="StopPodSandbox for \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\" returns successfully" Jul 7 00:17:34.546923 containerd[1908]: time="2025-07-07T00:17:34.546834047Z" level=info msg="StopContainer for \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" returns successfully" Jul 7 00:17:34.547325 containerd[1908]: time="2025-07-07T00:17:34.535916451Z" level=info msg="received exit event sandbox_id:\"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\" exit_status:137 exited_at:{seconds:1751847454 nanos:408476186}" Jul 7 00:17:34.548001 containerd[1908]: time="2025-07-07T00:17:34.547975828Z" level=info msg="StopPodSandbox for \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\"" Jul 7 00:17:34.548226 containerd[1908]: time="2025-07-07T00:17:34.548189365Z" level=info msg="Container to stop \"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:17:34.548332 containerd[1908]: time="2025-07-07T00:17:34.548314126Z" level=info msg="Container to stop \"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:17:34.548404 containerd[1908]: time="2025-07-07T00:17:34.548388538Z" level=info msg="Container to stop \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:17:34.548473 containerd[1908]: 
time="2025-07-07T00:17:34.548460447Z" level=info msg="Container to stop \"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:17:34.548541 containerd[1908]: time="2025-07-07T00:17:34.548528039Z" level=info msg="Container to stop \"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:17:34.570735 systemd[1]: cri-containerd-563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe.scope: Deactivated successfully. Jul 7 00:17:34.633272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe-rootfs.mount: Deactivated successfully. Jul 7 00:17:34.642933 containerd[1908]: time="2025-07-07T00:17:34.642884609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" id:\"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" pid:4041 exited_at:{seconds:1751847454 nanos:464796759}" Jul 7 00:17:34.643380 containerd[1908]: time="2025-07-07T00:17:34.643157259Z" level=info msg="TaskExit event in podsandbox handler container_id:\"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" id:\"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" pid:3570 exit_status:137 exited_at:{seconds:1751847454 nanos:574015601}" Jul 7 00:17:34.647214 containerd[1908]: time="2025-07-07T00:17:34.647118766Z" level=info msg="received exit event sandbox_id:\"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" exit_status:137 exited_at:{seconds:1751847454 nanos:574015601}" Jul 7 00:17:34.648223 containerd[1908]: time="2025-07-07T00:17:34.647326820Z" level=info msg="shim disconnected" id=563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe namespace=k8s.io Jul 7 00:17:34.648223 
containerd[1908]: time="2025-07-07T00:17:34.648138199Z" level=warning msg="cleaning up after shim disconnected" id=563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe namespace=k8s.io Jul 7 00:17:34.648321 containerd[1908]: time="2025-07-07T00:17:34.648154787Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:17:34.648750 containerd[1908]: time="2025-07-07T00:17:34.648127939Z" level=info msg="TearDown network for sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" successfully" Jul 7 00:17:34.648750 containerd[1908]: time="2025-07-07T00:17:34.648749896Z" level=info msg="StopPodSandbox for \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" returns successfully" Jul 7 00:17:34.734311 kubelet[3220]: I0707 00:17:34.734151 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsjsz\" (UniqueName: \"kubernetes.io/projected/230ef0fe-2300-42a0-afe2-ab9c0a1d8586-kube-api-access-vsjsz\") pod \"230ef0fe-2300-42a0-afe2-ab9c0a1d8586\" (UID: \"230ef0fe-2300-42a0-afe2-ab9c0a1d8586\") " Jul 7 00:17:34.734311 kubelet[3220]: I0707 00:17:34.734234 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/230ef0fe-2300-42a0-afe2-ab9c0a1d8586-cilium-config-path\") pod \"230ef0fe-2300-42a0-afe2-ab9c0a1d8586\" (UID: \"230ef0fe-2300-42a0-afe2-ab9c0a1d8586\") " Jul 7 00:17:34.736394 kubelet[3220]: I0707 00:17:34.736354 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/230ef0fe-2300-42a0-afe2-ab9c0a1d8586-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "230ef0fe-2300-42a0-afe2-ab9c0a1d8586" (UID: "230ef0fe-2300-42a0-afe2-ab9c0a1d8586"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:17:34.740536 kubelet[3220]: I0707 00:17:34.740484 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230ef0fe-2300-42a0-afe2-ab9c0a1d8586-kube-api-access-vsjsz" (OuterVolumeSpecName: "kube-api-access-vsjsz") pod "230ef0fe-2300-42a0-afe2-ab9c0a1d8586" (UID: "230ef0fe-2300-42a0-afe2-ab9c0a1d8586"). InnerVolumeSpecName "kube-api-access-vsjsz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:17:34.835437 kubelet[3220]: I0707 00:17:34.835353 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-lib-modules\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.835851 kubelet[3220]: I0707 00:17:34.835533 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9qfn\" (UniqueName: \"kubernetes.io/projected/52118ea3-37d9-4b63-82be-dfd4b040e547-kube-api-access-l9qfn\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.835851 kubelet[3220]: I0707 00:17:34.835745 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-cilium-run\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.835851 kubelet[3220]: I0707 00:17:34.835776 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52118ea3-37d9-4b63-82be-dfd4b040e547-clustermesh-secrets\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.835851 kubelet[3220]: I0707 00:17:34.835794 3220 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-host-proc-sys-kernel\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.835851 kubelet[3220]: I0707 00:17:34.835809 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-host-proc-sys-net\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.835851 kubelet[3220]: I0707 00:17:34.835825 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-cni-path\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.836058 kubelet[3220]: I0707 00:17:34.835841 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52118ea3-37d9-4b63-82be-dfd4b040e547-hubble-tls\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.836058 kubelet[3220]: I0707 00:17:34.835862 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52118ea3-37d9-4b63-82be-dfd4b040e547-cilium-config-path\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.836058 kubelet[3220]: I0707 00:17:34.835879 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-xtables-lock\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: 
\"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.836058 kubelet[3220]: I0707 00:17:34.835894 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-cilium-cgroup\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.836058 kubelet[3220]: I0707 00:17:34.835909 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-hostproc\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.836058 kubelet[3220]: I0707 00:17:34.835923 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-etc-cni-netd\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.836234 kubelet[3220]: I0707 00:17:34.835940 3220 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-bpf-maps\") pod \"52118ea3-37d9-4b63-82be-dfd4b040e547\" (UID: \"52118ea3-37d9-4b63-82be-dfd4b040e547\") " Jul 7 00:17:34.836234 kubelet[3220]: I0707 00:17:34.835989 3220 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/230ef0fe-2300-42a0-afe2-ab9c0a1d8586-cilium-config-path\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.836234 kubelet[3220]: I0707 00:17:34.836001 3220 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vsjsz\" (UniqueName: \"kubernetes.io/projected/230ef0fe-2300-42a0-afe2-ab9c0a1d8586-kube-api-access-vsjsz\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 
00:17:34.836234 kubelet[3220]: I0707 00:17:34.836052 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:17:34.836234 kubelet[3220]: I0707 00:17:34.836088 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:17:34.839234 kubelet[3220]: I0707 00:17:34.839006 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52118ea3-37d9-4b63-82be-dfd4b040e547-kube-api-access-l9qfn" (OuterVolumeSpecName: "kube-api-access-l9qfn") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "kube-api-access-l9qfn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:17:34.839234 kubelet[3220]: I0707 00:17:34.839079 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52118ea3-37d9-4b63-82be-dfd4b040e547-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:17:34.839234 kubelet[3220]: I0707 00:17:34.839119 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:17:34.841265 kubelet[3220]: I0707 00:17:34.841173 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52118ea3-37d9-4b63-82be-dfd4b040e547-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:17:34.841400 kubelet[3220]: I0707 00:17:34.841386 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:17:34.841461 kubelet[3220]: I0707 00:17:34.841452 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:17:34.841521 kubelet[3220]: I0707 00:17:34.841512 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-hostproc" (OuterVolumeSpecName: "hostproc") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:17:34.841592 kubelet[3220]: I0707 00:17:34.841511 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52118ea3-37d9-4b63-82be-dfd4b040e547-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:17:34.841592 kubelet[3220]: I0707 00:17:34.841533 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:17:34.841592 kubelet[3220]: I0707 00:17:34.841543 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:17:34.841592 kubelet[3220]: I0707 00:17:34.841553 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-cni-path" (OuterVolumeSpecName: "cni-path") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:17:34.841592 kubelet[3220]: I0707 00:17:34.841566 3220 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "52118ea3-37d9-4b63-82be-dfd4b040e547" (UID: "52118ea3-37d9-4b63-82be-dfd4b040e547"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:17:34.936694 kubelet[3220]: I0707 00:17:34.936653 3220 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-cilium-run\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.936694 kubelet[3220]: I0707 00:17:34.936690 3220 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52118ea3-37d9-4b63-82be-dfd4b040e547-clustermesh-secrets\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.937043 kubelet[3220]: I0707 00:17:34.936705 3220 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-host-proc-sys-net\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.937043 kubelet[3220]: I0707 00:17:34.936735 3220 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-host-proc-sys-kernel\") on node 
\"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.937043 kubelet[3220]: I0707 00:17:34.936743 3220 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52118ea3-37d9-4b63-82be-dfd4b040e547-cilium-config-path\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.937043 kubelet[3220]: I0707 00:17:34.936753 3220 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-cni-path\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.937043 kubelet[3220]: I0707 00:17:34.936761 3220 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52118ea3-37d9-4b63-82be-dfd4b040e547-hubble-tls\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.937043 kubelet[3220]: I0707 00:17:34.936769 3220 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-cilium-cgroup\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.937043 kubelet[3220]: I0707 00:17:34.936777 3220 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-xtables-lock\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.937043 kubelet[3220]: I0707 00:17:34.936784 3220 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-etc-cni-netd\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.937280 kubelet[3220]: I0707 00:17:34.936792 3220 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-bpf-maps\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.937280 kubelet[3220]: I0707 
00:17:34.936800 3220 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-hostproc\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.937280 kubelet[3220]: I0707 00:17:34.936808 3220 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52118ea3-37d9-4b63-82be-dfd4b040e547-lib-modules\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:34.937280 kubelet[3220]: I0707 00:17:34.936817 3220 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9qfn\" (UniqueName: \"kubernetes.io/projected/52118ea3-37d9-4b63-82be-dfd4b040e547-kube-api-access-l9qfn\") on node \"ip-172-31-30-121\" DevicePath \"\"" Jul 7 00:17:35.073067 kubelet[3220]: I0707 00:17:35.072800 3220 scope.go:117] "RemoveContainer" containerID="112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef" Jul 7 00:17:35.078687 containerd[1908]: time="2025-07-07T00:17:35.077406087Z" level=info msg="RemoveContainer for \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\"" Jul 7 00:17:35.079758 systemd[1]: Removed slice kubepods-besteffort-pod230ef0fe_2300_42a0_afe2_ab9c0a1d8586.slice - libcontainer container kubepods-besteffort-pod230ef0fe_2300_42a0_afe2_ab9c0a1d8586.slice. Jul 7 00:17:35.096298 systemd[1]: Removed slice kubepods-burstable-pod52118ea3_37d9_4b63_82be_dfd4b040e547.slice - libcontainer container kubepods-burstable-pod52118ea3_37d9_4b63_82be_dfd4b040e547.slice. Jul 7 00:17:35.096469 systemd[1]: kubepods-burstable-pod52118ea3_37d9_4b63_82be_dfd4b040e547.slice: Consumed 8.540s CPU time, 237.2M memory peak, 121.4M read from disk, 13.3M written to disk. 
Jul 7 00:17:35.110966 containerd[1908]: time="2025-07-07T00:17:35.110911502Z" level=info msg="RemoveContainer for \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\" returns successfully" Jul 7 00:17:35.115424 kubelet[3220]: I0707 00:17:35.115227 3220 scope.go:117] "RemoveContainer" containerID="112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef" Jul 7 00:17:35.123261 containerd[1908]: time="2025-07-07T00:17:35.117834426Z" level=error msg="ContainerStatus for \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\": not found" Jul 7 00:17:35.126848 kubelet[3220]: E0707 00:17:35.126786 3220 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\": not found" containerID="112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef" Jul 7 00:17:35.127013 kubelet[3220]: I0707 00:17:35.126850 3220 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef"} err="failed to get container status \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\": rpc error: code = NotFound desc = an error occurred when try to find container \"112e5bb3339460450331f12959ea7c15c209a99a53c3b2a8b424b379adbfdbef\": not found" Jul 7 00:17:35.127013 kubelet[3220]: I0707 00:17:35.126957 3220 scope.go:117] "RemoveContainer" containerID="7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4" Jul 7 00:17:35.134681 containerd[1908]: time="2025-07-07T00:17:35.134617638Z" level=info msg="RemoveContainer for \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\"" Jul 7 00:17:35.142163 
containerd[1908]: time="2025-07-07T00:17:35.142105779Z" level=info msg="RemoveContainer for \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" returns successfully" Jul 7 00:17:35.142474 kubelet[3220]: I0707 00:17:35.142443 3220 scope.go:117] "RemoveContainer" containerID="82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d" Jul 7 00:17:35.144308 containerd[1908]: time="2025-07-07T00:17:35.144252588Z" level=info msg="RemoveContainer for \"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\"" Jul 7 00:17:35.152139 containerd[1908]: time="2025-07-07T00:17:35.152071854Z" level=info msg="RemoveContainer for \"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\" returns successfully" Jul 7 00:17:35.152423 kubelet[3220]: I0707 00:17:35.152392 3220 scope.go:117] "RemoveContainer" containerID="f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3" Jul 7 00:17:35.155347 containerd[1908]: time="2025-07-07T00:17:35.155301642Z" level=info msg="RemoveContainer for \"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\"" Jul 7 00:17:35.162045 containerd[1908]: time="2025-07-07T00:17:35.162002354Z" level=info msg="RemoveContainer for \"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\" returns successfully" Jul 7 00:17:35.162330 kubelet[3220]: I0707 00:17:35.162294 3220 scope.go:117] "RemoveContainer" containerID="13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9" Jul 7 00:17:35.164983 containerd[1908]: time="2025-07-07T00:17:35.164462199Z" level=info msg="RemoveContainer for \"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\"" Jul 7 00:17:35.170461 containerd[1908]: time="2025-07-07T00:17:35.170401296Z" level=info msg="RemoveContainer for \"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\" returns successfully" Jul 7 00:17:35.170863 kubelet[3220]: I0707 00:17:35.170832 3220 scope.go:117] "RemoveContainer" 
containerID="6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726" Jul 7 00:17:35.172912 containerd[1908]: time="2025-07-07T00:17:35.172821739Z" level=info msg="RemoveContainer for \"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\"" Jul 7 00:17:35.178503 containerd[1908]: time="2025-07-07T00:17:35.178412807Z" level=info msg="RemoveContainer for \"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\" returns successfully" Jul 7 00:17:35.178815 kubelet[3220]: I0707 00:17:35.178783 3220 scope.go:117] "RemoveContainer" containerID="7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4" Jul 7 00:17:35.179187 containerd[1908]: time="2025-07-07T00:17:35.179140434Z" level=error msg="ContainerStatus for \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\": not found" Jul 7 00:17:35.179392 kubelet[3220]: E0707 00:17:35.179342 3220 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\": not found" containerID="7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4" Jul 7 00:17:35.179524 kubelet[3220]: I0707 00:17:35.179474 3220 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4"} err="failed to get container status \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\": rpc error: code = NotFound desc = an error occurred when try to find container \"7341c67eac1cb926d3da0dc3c673fc64a243e2619e5418c84ef3269540811db4\": not found" Jul 7 00:17:35.179524 kubelet[3220]: I0707 00:17:35.179504 3220 scope.go:117] "RemoveContainer" 
containerID="82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d" Jul 7 00:17:35.179992 containerd[1908]: time="2025-07-07T00:17:35.179962554Z" level=error msg="ContainerStatus for \"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\": not found" Jul 7 00:17:35.180148 kubelet[3220]: E0707 00:17:35.180095 3220 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\": not found" containerID="82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d" Jul 7 00:17:35.180238 kubelet[3220]: I0707 00:17:35.180152 3220 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d"} err="failed to get container status \"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\": rpc error: code = NotFound desc = an error occurred when try to find container \"82a302d48d389031ddf14df7c3b79fdf48fc14f0cc8f6ab19270931c3ae5a59d\": not found" Jul 7 00:17:35.180238 kubelet[3220]: I0707 00:17:35.180176 3220 scope.go:117] "RemoveContainer" containerID="f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3" Jul 7 00:17:35.180452 containerd[1908]: time="2025-07-07T00:17:35.180421011Z" level=error msg="ContainerStatus for \"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\": not found" Jul 7 00:17:35.180555 kubelet[3220]: E0707 00:17:35.180534 3220 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\": not found" containerID="f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3" Jul 7 00:17:35.180606 kubelet[3220]: I0707 00:17:35.180559 3220 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3"} err="failed to get container status \"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6750736a163e3261752ab1103298dd61822126f05444e87f09592f5cd7622c3\": not found" Jul 7 00:17:35.180606 kubelet[3220]: I0707 00:17:35.180577 3220 scope.go:117] "RemoveContainer" containerID="13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9" Jul 7 00:17:35.180838 containerd[1908]: time="2025-07-07T00:17:35.180759299Z" level=error msg="ContainerStatus for \"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\": not found" Jul 7 00:17:35.180979 kubelet[3220]: E0707 00:17:35.180950 3220 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\": not found" containerID="13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9" Jul 7 00:17:35.181043 kubelet[3220]: I0707 00:17:35.180979 3220 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9"} err="failed to get container status \"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"13eeae4264aca92d2c38d21daeb25580c0555a4cc9fc96778f8276d0d0d902b9\": not found" Jul 7 00:17:35.181043 kubelet[3220]: I0707 00:17:35.180993 3220 scope.go:117] "RemoveContainer" containerID="6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726" Jul 7 00:17:35.181269 containerd[1908]: time="2025-07-07T00:17:35.181241356Z" level=error msg="ContainerStatus for \"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\": not found" Jul 7 00:17:35.181390 kubelet[3220]: E0707 00:17:35.181369 3220 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\": not found" containerID="6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726" Jul 7 00:17:35.181447 kubelet[3220]: I0707 00:17:35.181410 3220 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726"} err="failed to get container status \"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\": rpc error: code = NotFound desc = an error occurred when try to find container \"6202229b6554f854655a2b7c6329dc68e1a5be686a4a19abf21afed751d64726\": not found" Jul 7 00:17:35.359180 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe-shm.mount: Deactivated successfully. Jul 7 00:17:35.359930 systemd[1]: var-lib-kubelet-pods-230ef0fe\x2d2300\x2d42a0\x2dafe2\x2dab9c0a1d8586-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvsjsz.mount: Deactivated successfully. 
Jul 7 00:17:35.360009 systemd[1]: var-lib-kubelet-pods-52118ea3\x2d37d9\x2d4b63\x2d82be\x2ddfd4b040e547-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl9qfn.mount: Deactivated successfully. Jul 7 00:17:35.360077 systemd[1]: var-lib-kubelet-pods-52118ea3\x2d37d9\x2d4b63\x2d82be\x2ddfd4b040e547-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 00:17:35.360135 systemd[1]: var-lib-kubelet-pods-52118ea3\x2d37d9\x2d4b63\x2d82be\x2ddfd4b040e547-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 00:17:35.713886 kubelet[3220]: E0707 00:17:35.713762 3220 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 00:17:36.221336 sshd[4953]: Connection closed by 147.75.109.163 port 51064 Jul 7 00:17:36.224003 sshd-session[4951]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:36.228308 systemd[1]: sshd@25-172.31.30.121:22-147.75.109.163:51064.service: Deactivated successfully. Jul 7 00:17:36.230932 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 00:17:36.234672 systemd-logind[1861]: Session 26 logged out. Waiting for processes to exit. Jul 7 00:17:36.236432 systemd-logind[1861]: Removed session 26. Jul 7 00:17:36.256099 systemd[1]: Started sshd@26-172.31.30.121:22-147.75.109.163:56718.service - OpenSSH per-connection server daemon (147.75.109.163:56718). Jul 7 00:17:36.449559 sshd[5108]: Accepted publickey for core from 147.75.109.163 port 56718 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:36.451246 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:36.459846 systemd-logind[1861]: New session 27 of user core. Jul 7 00:17:36.463424 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 7 00:17:36.574792 kubelet[3220]: I0707 00:17:36.574710 3220 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="230ef0fe-2300-42a0-afe2-ab9c0a1d8586" path="/var/lib/kubelet/pods/230ef0fe-2300-42a0-afe2-ab9c0a1d8586/volumes" Jul 7 00:17:36.576770 kubelet[3220]: I0707 00:17:36.575249 3220 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52118ea3-37d9-4b63-82be-dfd4b040e547" path="/var/lib/kubelet/pods/52118ea3-37d9-4b63-82be-dfd4b040e547/volumes" Jul 7 00:17:37.088419 ntpd[1851]: Deleting interface #11 lxc_health, fe80::f4a2:37ff:fe9c:79a5%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Jul 7 00:17:37.088927 ntpd[1851]: 7 Jul 00:17:37 ntpd[1851]: Deleting interface #11 lxc_health, fe80::f4a2:37ff:fe9c:79a5%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Jul 7 00:17:37.177228 sshd[5110]: Connection closed by 147.75.109.163 port 56718 Jul 7 00:17:37.178730 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:37.187268 systemd-logind[1861]: Session 27 logged out. Waiting for processes to exit. Jul 7 00:17:37.190625 systemd[1]: sshd@26-172.31.30.121:22-147.75.109.163:56718.service: Deactivated successfully. Jul 7 00:17:37.196768 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 00:17:37.219494 systemd-logind[1861]: Removed session 27. Jul 7 00:17:37.223584 systemd[1]: Started sshd@27-172.31.30.121:22-147.75.109.163:56722.service - OpenSSH per-connection server daemon (147.75.109.163:56722). 
Jul 7 00:17:37.236443 kubelet[3220]: I0707 00:17:37.236290 3220 memory_manager.go:355] "RemoveStaleState removing state" podUID="230ef0fe-2300-42a0-afe2-ab9c0a1d8586" containerName="cilium-operator"
Jul 7 00:17:37.236443 kubelet[3220]: I0707 00:17:37.236354 3220 memory_manager.go:355] "RemoveStaleState removing state" podUID="52118ea3-37d9-4b63-82be-dfd4b040e547" containerName="cilium-agent"
Jul 7 00:17:37.257259 systemd[1]: Created slice kubepods-burstable-pode29d0f16_8b3d_4562_9fdd_e38b06b85914.slice - libcontainer container kubepods-burstable-pode29d0f16_8b3d_4562_9fdd_e38b06b85914.slice.
Jul 7 00:17:37.354056 kubelet[3220]: I0707 00:17:37.353600 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e29d0f16-8b3d-4562-9fdd-e38b06b85914-cilium-config-path\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354056 kubelet[3220]: I0707 00:17:37.353645 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e29d0f16-8b3d-4562-9fdd-e38b06b85914-host-proc-sys-net\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354056 kubelet[3220]: I0707 00:17:37.353669 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e29d0f16-8b3d-4562-9fdd-e38b06b85914-hostproc\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354056 kubelet[3220]: I0707 00:17:37.353687 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e29d0f16-8b3d-4562-9fdd-e38b06b85914-host-proc-sys-kernel\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354056 kubelet[3220]: I0707 00:17:37.353705 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e29d0f16-8b3d-4562-9fdd-e38b06b85914-cilium-run\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354056 kubelet[3220]: I0707 00:17:37.353722 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e29d0f16-8b3d-4562-9fdd-e38b06b85914-cilium-cgroup\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354465 kubelet[3220]: I0707 00:17:37.353738 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e29d0f16-8b3d-4562-9fdd-e38b06b85914-etc-cni-netd\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354465 kubelet[3220]: I0707 00:17:37.353756 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e29d0f16-8b3d-4562-9fdd-e38b06b85914-hubble-tls\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354465 kubelet[3220]: I0707 00:17:37.353772 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dsxf\" (UniqueName: \"kubernetes.io/projected/e29d0f16-8b3d-4562-9fdd-e38b06b85914-kube-api-access-2dsxf\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354465 kubelet[3220]: I0707 00:17:37.353792 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e29d0f16-8b3d-4562-9fdd-e38b06b85914-lib-modules\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354465 kubelet[3220]: I0707 00:17:37.353808 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e29d0f16-8b3d-4562-9fdd-e38b06b85914-xtables-lock\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354465 kubelet[3220]: I0707 00:17:37.353842 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e29d0f16-8b3d-4562-9fdd-e38b06b85914-bpf-maps\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354626 kubelet[3220]: I0707 00:17:37.353877 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e29d0f16-8b3d-4562-9fdd-e38b06b85914-cni-path\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354626 kubelet[3220]: I0707 00:17:37.353895 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e29d0f16-8b3d-4562-9fdd-e38b06b85914-cilium-ipsec-secrets\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.354626 kubelet[3220]: I0707 00:17:37.353912 3220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e29d0f16-8b3d-4562-9fdd-e38b06b85914-clustermesh-secrets\") pod \"cilium-zvsc8\" (UID: \"e29d0f16-8b3d-4562-9fdd-e38b06b85914\") " pod="kube-system/cilium-zvsc8"
Jul 7 00:17:37.425106 sshd[5121]: Accepted publickey for core from 147.75.109.163 port 56722 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM
Jul 7 00:17:37.426175 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:17:37.432505 systemd-logind[1861]: New session 28 of user core.
Jul 7 00:17:37.440467 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 7 00:17:37.556651 sshd[5123]: Connection closed by 147.75.109.163 port 56722
Jul 7 00:17:37.558486 sshd-session[5121]: pam_unix(sshd:session): session closed for user core
Jul 7 00:17:37.564069 systemd[1]: sshd@27-172.31.30.121:22-147.75.109.163:56722.service: Deactivated successfully.
Jul 7 00:17:37.566023 systemd[1]: session-28.scope: Deactivated successfully.
Jul 7 00:17:37.567913 systemd-logind[1861]: Session 28 logged out. Waiting for processes to exit.
Jul 7 00:17:37.569052 containerd[1908]: time="2025-07-07T00:17:37.568649423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvsc8,Uid:e29d0f16-8b3d-4562-9fdd-e38b06b85914,Namespace:kube-system,Attempt:0,}"
Jul 7 00:17:37.569989 systemd-logind[1861]: Removed session 28.
Jul 7 00:17:37.601098 systemd[1]: Started sshd@28-172.31.30.121:22-147.75.109.163:56732.service - OpenSSH per-connection server daemon (147.75.109.163:56732).
Jul 7 00:17:37.610347 containerd[1908]: time="2025-07-07T00:17:37.610141333Z" level=info msg="connecting to shim 3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394" address="unix:///run/containerd/s/f2ca9e4ee6b23e59716d9fbcad4d770e8fffdf0a90ca01cc1ad9cee797cc3073" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:17:37.644666 systemd[1]: Started cri-containerd-3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394.scope - libcontainer container 3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394.
Jul 7 00:17:37.677874 containerd[1908]: time="2025-07-07T00:17:37.677821946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvsc8,Uid:e29d0f16-8b3d-4562-9fdd-e38b06b85914,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394\""
Jul 7 00:17:37.685780 containerd[1908]: time="2025-07-07T00:17:37.685721705Z" level=info msg="CreateContainer within sandbox \"3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 00:17:37.698333 containerd[1908]: time="2025-07-07T00:17:37.697646719Z" level=info msg="Container fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:17:37.710171 containerd[1908]: time="2025-07-07T00:17:37.710113622Z" level=info msg="CreateContainer within sandbox \"3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2\""
Jul 7 00:17:37.712818 containerd[1908]: time="2025-07-07T00:17:37.712770318Z" level=info msg="StartContainer for \"fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2\""
Jul 7 00:17:37.714165 containerd[1908]: time="2025-07-07T00:17:37.714062447Z" level=info msg="connecting to shim fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2" address="unix:///run/containerd/s/f2ca9e4ee6b23e59716d9fbcad4d770e8fffdf0a90ca01cc1ad9cee797cc3073" protocol=ttrpc version=3
Jul 7 00:17:37.738547 systemd[1]: Started cri-containerd-fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2.scope - libcontainer container fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2.
Jul 7 00:17:37.764320 update_engine[1864]: I20250707 00:17:37.764271 1864 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:17:37.765889 update_engine[1864]: I20250707 00:17:37.765511 1864 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:17:37.765889 update_engine[1864]: I20250707 00:17:37.765836 1864 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:17:37.776962 update_engine[1864]: E20250707 00:17:37.776700 1864 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:17:37.776962 update_engine[1864]: I20250707 00:17:37.776794 1864 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 7 00:17:37.789544 containerd[1908]: time="2025-07-07T00:17:37.789492446Z" level=info msg="StartContainer for \"fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2\" returns successfully"
Jul 7 00:17:37.792441 sshd[5139]: Accepted publickey for core from 147.75.109.163 port 56732 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM
Jul 7 00:17:37.794186 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:17:37.802421 systemd-logind[1861]: New session 29 of user core.
Jul 7 00:17:37.808767 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 7 00:17:37.809369 systemd[1]: cri-containerd-fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2.scope: Deactivated successfully.
Jul 7 00:17:37.810356 systemd[1]: cri-containerd-fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2.scope: Consumed 25ms CPU time, 9.6M memory peak, 3M read from disk.
Jul 7 00:17:37.810738 containerd[1908]: time="2025-07-07T00:17:37.810560254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2\" id:\"fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2\" pid:5195 exited_at:{seconds:1751847457 nanos:808882531}"
Jul 7 00:17:37.811281 containerd[1908]: time="2025-07-07T00:17:37.810909481Z" level=info msg="received exit event container_id:\"fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2\" id:\"fe69f8b2829026b9a0a96878e7d1adb1c9be2b8b763509f91c5745b53a1c9bb2\" pid:5195 exited_at:{seconds:1751847457 nanos:808882531}"
Jul 7 00:17:38.122214 containerd[1908]: time="2025-07-07T00:17:38.122069665Z" level=info msg="CreateContainer within sandbox \"3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 00:17:38.139902 containerd[1908]: time="2025-07-07T00:17:38.139849967Z" level=info msg="Container db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:17:38.152497 containerd[1908]: time="2025-07-07T00:17:38.152440154Z" level=info msg="CreateContainer within sandbox \"3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0\""
Jul 7 00:17:38.153412 containerd[1908]: time="2025-07-07T00:17:38.153373258Z" level=info msg="StartContainer for \"db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0\""
Jul 7 00:17:38.157936 containerd[1908]: time="2025-07-07T00:17:38.157885037Z" level=info msg="connecting to shim db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0" address="unix:///run/containerd/s/f2ca9e4ee6b23e59716d9fbcad4d770e8fffdf0a90ca01cc1ad9cee797cc3073" protocol=ttrpc version=3
Jul 7 00:17:38.184499 systemd[1]: Started cri-containerd-db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0.scope - libcontainer container db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0.
Jul 7 00:17:38.223955 containerd[1908]: time="2025-07-07T00:17:38.223922431Z" level=info msg="StartContainer for \"db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0\" returns successfully"
Jul 7 00:17:38.235392 systemd[1]: cri-containerd-db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0.scope: Deactivated successfully.
Jul 7 00:17:38.236285 containerd[1908]: time="2025-07-07T00:17:38.236139167Z" level=info msg="received exit event container_id:\"db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0\" id:\"db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0\" pid:5248 exited_at:{seconds:1751847458 nanos:235825357}"
Jul 7 00:17:38.237110 containerd[1908]: time="2025-07-07T00:17:38.236540177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0\" id:\"db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0\" pid:5248 exited_at:{seconds:1751847458 nanos:235825357}"
Jul 7 00:17:38.236635 systemd[1]: cri-containerd-db68ae04434cecbeeb47b9f95e76a2f07e4a26d25bab92df5d793a98a131e3c0.scope: Consumed 23ms CPU time, 7.6M memory peak, 2.2M read from disk.
Jul 7 00:17:39.116347 containerd[1908]: time="2025-07-07T00:17:39.115816086Z" level=info msg="CreateContainer within sandbox \"3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 00:17:39.135314 containerd[1908]: time="2025-07-07T00:17:39.133333829Z" level=info msg="Container 972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:17:39.154933 containerd[1908]: time="2025-07-07T00:17:39.154871798Z" level=info msg="CreateContainer within sandbox \"3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93\""
Jul 7 00:17:39.156688 containerd[1908]: time="2025-07-07T00:17:39.156602785Z" level=info msg="StartContainer for \"972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93\""
Jul 7 00:17:39.158878 containerd[1908]: time="2025-07-07T00:17:39.158837697Z" level=info msg="connecting to shim 972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93" address="unix:///run/containerd/s/f2ca9e4ee6b23e59716d9fbcad4d770e8fffdf0a90ca01cc1ad9cee797cc3073" protocol=ttrpc version=3
Jul 7 00:17:39.189608 systemd[1]: Started cri-containerd-972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93.scope - libcontainer container 972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93.
Jul 7 00:17:39.249419 containerd[1908]: time="2025-07-07T00:17:39.249369106Z" level=info msg="StartContainer for \"972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93\" returns successfully"
Jul 7 00:17:39.260345 systemd[1]: cri-containerd-972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93.scope: Deactivated successfully.
Jul 7 00:17:39.261011 systemd[1]: cri-containerd-972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93.scope: Consumed 28ms CPU time, 5.8M memory peak, 1.1M read from disk.
Jul 7 00:17:39.261834 containerd[1908]: time="2025-07-07T00:17:39.261799341Z" level=info msg="received exit event container_id:\"972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93\" id:\"972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93\" pid:5296 exited_at:{seconds:1751847459 nanos:261007942}"
Jul 7 00:17:39.262820 containerd[1908]: time="2025-07-07T00:17:39.262757640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93\" id:\"972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93\" pid:5296 exited_at:{seconds:1751847459 nanos:261007942}"
Jul 7 00:17:39.296401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-972c0a03209ac332e201696974f6585a133e98a1cf4edb79e550d1e4a923de93-rootfs.mount: Deactivated successfully.
Jul 7 00:17:40.126577 containerd[1908]: time="2025-07-07T00:17:40.126403331Z" level=info msg="CreateContainer within sandbox \"3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 00:17:40.156543 containerd[1908]: time="2025-07-07T00:17:40.156471820Z" level=info msg="Container bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:17:40.168786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725042063.mount: Deactivated successfully.
Jul 7 00:17:40.188215 containerd[1908]: time="2025-07-07T00:17:40.188112018Z" level=info msg="CreateContainer within sandbox \"3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de\""
Jul 7 00:17:40.190602 containerd[1908]: time="2025-07-07T00:17:40.190504872Z" level=info msg="StartContainer for \"bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de\""
Jul 7 00:17:40.194434 containerd[1908]: time="2025-07-07T00:17:40.194257021Z" level=info msg="connecting to shim bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de" address="unix:///run/containerd/s/f2ca9e4ee6b23e59716d9fbcad4d770e8fffdf0a90ca01cc1ad9cee797cc3073" protocol=ttrpc version=3
Jul 7 00:17:40.239465 systemd[1]: Started cri-containerd-bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de.scope - libcontainer container bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de.
Jul 7 00:17:40.361420 systemd[1]: cri-containerd-bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de.scope: Deactivated successfully.
Jul 7 00:17:40.364854 containerd[1908]: time="2025-07-07T00:17:40.364506712Z" level=info msg="received exit event container_id:\"bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de\" id:\"bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de\" pid:5338 exited_at:{seconds:1751847460 nanos:362355348}"
Jul 7 00:17:40.365956 containerd[1908]: time="2025-07-07T00:17:40.365902626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de\" id:\"bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de\" pid:5338 exited_at:{seconds:1751847460 nanos:362355348}"
Jul 7 00:17:40.379930 containerd[1908]: time="2025-07-07T00:17:40.379796414Z" level=info msg="StartContainer for \"bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de\" returns successfully"
Jul 7 00:17:40.401315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfb76907b2d446d3945863a68a440c1ca497b9826a722cafda8df670fda3f5de-rootfs.mount: Deactivated successfully.
Jul 7 00:17:40.572872 kubelet[3220]: E0707 00:17:40.571392 3220 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qx7gx" podUID="f190a489-92fc-4bad-a9b6-52324e33ca3a"
Jul 7 00:17:40.714801 kubelet[3220]: E0707 00:17:40.714670 3220 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 7 00:17:41.129034 containerd[1908]: time="2025-07-07T00:17:41.128907046Z" level=info msg="CreateContainer within sandbox \"3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 00:17:41.149104 containerd[1908]: time="2025-07-07T00:17:41.149008946Z" level=info msg="Container 6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:17:41.168045 containerd[1908]: time="2025-07-07T00:17:41.167963676Z" level=info msg="CreateContainer within sandbox \"3f6b733d1dbcf9d0f3c38e57e80cf0f5ab6bb4b3438423451365ad167ceca394\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea\""
Jul 7 00:17:41.168896 containerd[1908]: time="2025-07-07T00:17:41.168668780Z" level=info msg="StartContainer for \"6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea\""
Jul 7 00:17:41.170111 containerd[1908]: time="2025-07-07T00:17:41.170061578Z" level=info msg="connecting to shim 6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea" address="unix:///run/containerd/s/f2ca9e4ee6b23e59716d9fbcad4d770e8fffdf0a90ca01cc1ad9cee797cc3073" protocol=ttrpc version=3
Jul 7 00:17:41.199714 systemd[1]: Started cri-containerd-6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea.scope - libcontainer container 6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea.
Jul 7 00:17:41.253180 containerd[1908]: time="2025-07-07T00:17:41.253134346Z" level=info msg="StartContainer for \"6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea\" returns successfully"
Jul 7 00:17:41.403824 containerd[1908]: time="2025-07-07T00:17:41.403674374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea\" id:\"8efab9a3605d258ab4191d65ddd9e14c441344abb5a772cbb5fbf4f6a1b2c4fe\" pid:5407 exited_at:{seconds:1751847461 nanos:403297559}"
Jul 7 00:17:41.991249 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 7 00:17:42.509394 containerd[1908]: time="2025-07-07T00:17:42.509347635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea\" id:\"bd4b6f63c4b6a55a006e7c345c2ebf01dbdfa5c58839dd877ca32c3f8dc3266a\" pid:5482 exit_status:1 exited_at:{seconds:1751847462 nanos:508444363}"
Jul 7 00:17:42.571233 kubelet[3220]: E0707 00:17:42.571147 3220 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qx7gx" podUID="f190a489-92fc-4bad-a9b6-52324e33ca3a"
Jul 7 00:17:43.270993 kubelet[3220]: I0707 00:17:43.270934 3220 setters.go:602] "Node became not ready" node="ip-172-31-30-121" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T00:17:43Z","lastTransitionTime":"2025-07-07T00:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 7 00:17:44.574405 kubelet[3220]: E0707 00:17:44.574354 3220 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qx7gx" podUID="f190a489-92fc-4bad-a9b6-52324e33ca3a"
Jul 7 00:17:44.715534 containerd[1908]: time="2025-07-07T00:17:44.715475647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea\" id:\"69c2351cd225536356f8af4aa9309023ed799b66c1e860c249bbb11e83eb7df3\" pid:5792 exit_status:1 exited_at:{seconds:1751847464 nanos:713520662}"
Jul 7 00:17:45.206005 (udev-worker)[5917]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 00:17:45.211721 (udev-worker)[5920]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 00:17:45.213846 systemd-networkd[1813]: lxc_health: Link UP
Jul 7 00:17:45.243576 systemd-networkd[1813]: lxc_health: Gained carrier
Jul 7 00:17:45.571253 kubelet[3220]: E0707 00:17:45.571172 3220 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-drm2l" podUID="95043214-8bcf-46eb-a8aa-5f4babbeeb72"
Jul 7 00:17:45.608487 kubelet[3220]: I0707 00:17:45.608415 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zvsc8" podStartSLOduration=8.608394163 podStartE2EDuration="8.608394163s" podCreationTimestamp="2025-07-07 00:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:17:42.194666634 +0000 UTC m=+111.881522369" watchObservedRunningTime="2025-07-07 00:17:45.608394163 +0000 UTC m=+115.295249896"
Jul 7 00:17:46.843387 systemd-networkd[1813]: lxc_health: Gained IPv6LL
Jul 7 00:17:47.001959 containerd[1908]: time="2025-07-07T00:17:47.001696898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea\" id:\"17d355750611fb25141fa2f0e75e479027bbd8808656da4f9e02d825cdd09f8b\" pid:5955 exited_at:{seconds:1751847466 nanos:999262967}"
Jul 7 00:17:47.771088 update_engine[1864]: I20250707 00:17:47.770263 1864 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:17:47.771088 update_engine[1864]: I20250707 00:17:47.770604 1864 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:17:47.771823 update_engine[1864]: I20250707 00:17:47.771787 1864 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:17:47.772317 update_engine[1864]: E20250707 00:17:47.772270 1864 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:17:47.772787 update_engine[1864]: I20250707 00:17:47.772563 1864 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 00:17:47.772787 update_engine[1864]: I20250707 00:17:47.772585 1864 omaha_request_action.cc:617] Omaha request response:
Jul 7 00:17:47.772787 update_engine[1864]: E20250707 00:17:47.772684 1864 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 7 00:17:47.776531 update_engine[1864]: I20250707 00:17:47.775637 1864 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 7 00:17:47.776531 update_engine[1864]: I20250707 00:17:47.775678 1864 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:17:47.776531 update_engine[1864]: I20250707 00:17:47.775688 1864 update_attempter.cc:306] Processing Done.
Jul 7 00:17:47.776531 update_engine[1864]: E20250707 00:17:47.775710 1864 update_attempter.cc:619] Update failed.
Jul 7 00:17:47.776531 update_engine[1864]: I20250707 00:17:47.775723 1864 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 7 00:17:47.776531 update_engine[1864]: I20250707 00:17:47.775732 1864 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 7 00:17:47.776531 update_engine[1864]: I20250707 00:17:47.775741 1864 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 7 00:17:47.776531 update_engine[1864]: I20250707 00:17:47.775857 1864 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 00:17:47.776531 update_engine[1864]: I20250707 00:17:47.775895 1864 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 00:17:47.776531 update_engine[1864]: I20250707 00:17:47.775904 1864 omaha_request_action.cc:272] Request:
Jul 7 00:17:47.776531 update_engine[1864]:
Jul 7 00:17:47.776531 update_engine[1864]:
Jul 7 00:17:47.776531 update_engine[1864]:
Jul 7 00:17:47.776531 update_engine[1864]:
Jul 7 00:17:47.776531 update_engine[1864]:
Jul 7 00:17:47.776531 update_engine[1864]:
Jul 7 00:17:47.776531 update_engine[1864]: I20250707 00:17:47.775913 1864 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:17:47.776531 update_engine[1864]: I20250707 00:17:47.776135 1864 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:17:47.776531 update_engine[1864]: I20250707 00:17:47.776468 1864 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:17:47.777669 locksmithd[1928]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 7 00:17:47.780353 update_engine[1864]: E20250707 00:17:47.780057 1864 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:17:47.780353 update_engine[1864]: I20250707 00:17:47.780143 1864 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 00:17:47.780353 update_engine[1864]: I20250707 00:17:47.780157 1864 omaha_request_action.cc:617] Omaha request response:
Jul 7 00:17:47.780353 update_engine[1864]: I20250707 00:17:47.780169 1864 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:17:47.780353 update_engine[1864]: I20250707 00:17:47.780176 1864 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:17:47.780353 update_engine[1864]: I20250707 00:17:47.780185 1864 update_attempter.cc:306] Processing Done.
Jul 7 00:17:47.780353 update_engine[1864]: I20250707 00:17:47.780195 1864 update_attempter.cc:310] Error event sent.
Jul 7 00:17:47.780353 update_engine[1864]: I20250707 00:17:47.780226 1864 update_check_scheduler.cc:74] Next update check in 45m3s
Jul 7 00:17:47.780725 locksmithd[1928]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 7 00:17:49.088443 ntpd[1851]: Listen normally on 14 lxc_health [fe80::f0a2:b3ff:feb9:d00f%14]:123
Jul 7 00:17:49.089896 ntpd[1851]: 7 Jul 00:17:49 ntpd[1851]: Listen normally on 14 lxc_health [fe80::f0a2:b3ff:feb9:d00f%14]:123
Jul 7 00:17:49.374006 containerd[1908]: time="2025-07-07T00:17:49.373565263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea\" id:\"c436bb88ac838a5f80f5ceecf3135f55d2aa7d4cb51eab7bfdbd6bc59721702a\" pid:5986 exited_at:{seconds:1751847469 nanos:372409481}"
Jul 7 00:17:49.378852 kubelet[3220]: E0707 00:17:49.378808 3220 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56478->127.0.0.1:45463: write tcp 127.0.0.1:56478->127.0.0.1:45463: write: broken pipe
Jul 7 00:17:50.586983 containerd[1908]: time="2025-07-07T00:17:50.586947323Z" level=info msg="StopPodSandbox for \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\""
Jul 7 00:17:50.587903 containerd[1908]: time="2025-07-07T00:17:50.587858899Z" level=info msg="TearDown network for sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" successfully"
Jul 7 00:17:50.587903 containerd[1908]: time="2025-07-07T00:17:50.587895381Z" level=info msg="StopPodSandbox for \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" returns successfully"
Jul 7 00:17:50.588534 containerd[1908]: time="2025-07-07T00:17:50.588484662Z" level=info msg="RemovePodSandbox for \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\""
Jul 7 00:17:50.588534 containerd[1908]: time="2025-07-07T00:17:50.588517430Z" level=info msg="Forcibly stopping sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\""
Jul 7 00:17:50.588726 containerd[1908]: time="2025-07-07T00:17:50.588636394Z" level=info msg="TearDown network for sandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" successfully"
Jul 7 00:17:50.593729 containerd[1908]: time="2025-07-07T00:17:50.593687870Z" level=info msg="Ensure that sandbox 563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe in task-service has been cleanup successfully"
Jul 7 00:17:50.600853 containerd[1908]: time="2025-07-07T00:17:50.600781216Z" level=info msg="RemovePodSandbox \"563f2fd9b1ec9ef2db406e73d4e43ff08337021ba975c9f0ace9c3259e5d1dfe\" returns successfully"
Jul 7 00:17:50.601696 containerd[1908]: time="2025-07-07T00:17:50.601448627Z" level=info msg="StopPodSandbox for \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\""
Jul 7 00:17:50.601696 containerd[1908]: time="2025-07-07T00:17:50.601620405Z" level=info msg="TearDown network for sandbox \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\" successfully"
Jul 7 00:17:50.601696 containerd[1908]: time="2025-07-07T00:17:50.601634510Z" level=info msg="StopPodSandbox for \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\" returns successfully"
Jul 7 00:17:50.602114 containerd[1908]: time="2025-07-07T00:17:50.602090534Z" level=info msg="RemovePodSandbox for \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\""
Jul 7 00:17:50.602158 containerd[1908]: time="2025-07-07T00:17:50.602119028Z" level=info msg="Forcibly stopping sandbox \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\""
Jul 7 00:17:50.602326 containerd[1908]: time="2025-07-07T00:17:50.602296385Z" level=info msg="TearDown network for sandbox \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\" successfully"
Jul 7 00:17:50.603611 containerd[1908]: time="2025-07-07T00:17:50.603581986Z" level=info msg="Ensure that sandbox 05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7 in task-service has been cleanup successfully"
Jul 7 00:17:50.609890 containerd[1908]: time="2025-07-07T00:17:50.609843058Z" level=info msg="RemovePodSandbox \"05ad84ad2668e01a93938cd21d8f4f46219568fa2b1bca772a0acf7af850f4f7\" returns successfully"
Jul 7 00:17:51.525547 containerd[1908]: time="2025-07-07T00:17:51.525498909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea\" id:\"3184e8caf540abf65cf0f0cabf879f31b7319f874aa401164f891862beb2d923\" pid:6017 exited_at:{seconds:1751847471 nanos:524987877}"
Jul 7 00:17:53.722871 containerd[1908]: time="2025-07-07T00:17:53.722815324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a942e0e2de7bc735a378fa09fbecf07f4b8e08a76a3c24c42eb2b390a00b6ea\" id:\"339349f65984811c27e0952a53fa81cdb10f3e18e393206ff3cb5f7b84be342b\" pid:6040 exited_at:{seconds:1751847473 nanos:721720650}"
Jul 7 00:17:53.750387 sshd[5215]: Connection closed by 147.75.109.163 port 56732
Jul 7 00:17:53.751905 sshd-session[5139]: pam_unix(sshd:session): session closed for user core
Jul 7 00:17:53.756662 systemd[1]: sshd@28-172.31.30.121:22-147.75.109.163:56732.service: Deactivated successfully.
Jul 7 00:17:53.759741 systemd[1]: session-29.scope: Deactivated successfully.
Jul 7 00:17:53.761012 systemd-logind[1861]: Session 29 logged out. Waiting for processes to exit.
Jul 7 00:17:53.762879 systemd-logind[1861]: Removed session 29.
Jul 7 00:18:11.897667 systemd[1]: cri-containerd-90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d.scope: Deactivated successfully.
Jul 7 00:18:11.898624 systemd[1]: cri-containerd-90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d.scope: Consumed 2.846s CPU time, 71.8M memory peak, 21.5M read from disk.
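When a `cri-containerd-*.scope` unit is deactivated, systemd logs a resource-accounting summary ("Consumed … CPU time, … memory peak, … read from disk"), as in the entries above. A small sketch of turning that summary into numbers, assuming the exact phrasing shown here (units are seconds and the `M` suffix systemd printed; the regex is illustrative and would need widening for other suffixes):

```python
import re

# Resource summary printed by systemd on scope deactivation, e.g.
# "Consumed 2.846s CPU time, 71.8M memory peak, 21.5M read from disk."
CONSUMED_RE = re.compile(
    r"Consumed (?P<cpu>[\d.]+)s CPU time, (?P<mem>[\d.]+)M memory peak, "
    r"(?P<disk>[\d.]+)M read from disk"
)

def parse_consumed(msg):
    """Return CPU seconds, peak memory (M), and disk read (M), or None."""
    m = CONSUMED_RE.search(msg)
    return {k: float(v) for k, v in m.groupdict().items()} if m else None

stats = parse_consumed(
    "Consumed 2.846s CPU time, 71.8M memory peak, 21.5M read from disk."
)
```

Summaries like this are handy for comparing the cost of repeated container restarts, e.g. the kube-controller-manager (2.846s CPU, 71.8M peak) versus kube-scheduler (2.729s CPU, 31.3M peak) exits later in this log.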
Jul 7 00:18:11.902146 containerd[1908]: time="2025-07-07T00:18:11.902087712Z" level=info msg="received exit event container_id:\"90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d\" id:\"90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d\" pid:3061 exit_status:1 exited_at:{seconds:1751847491 nanos:900958231}"
Jul 7 00:18:11.903738 containerd[1908]: time="2025-07-07T00:18:11.903684745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d\" id:\"90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d\" pid:3061 exit_status:1 exited_at:{seconds:1751847491 nanos:900958231}"
Jul 7 00:18:11.934658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d-rootfs.mount: Deactivated successfully.
Jul 7 00:18:12.236028 kubelet[3220]: I0707 00:18:12.235891 3220 scope.go:117] "RemoveContainer" containerID="90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d"
Jul 7 00:18:12.242500 containerd[1908]: time="2025-07-07T00:18:12.242121984Z" level=info msg="CreateContainer within sandbox \"e67c3a20b9678eab0a577f4420a040e7dfbc51cda3e242a6eed276ad8891a48a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 7 00:18:12.263916 containerd[1908]: time="2025-07-07T00:18:12.263874804Z" level=info msg="Container 0556ed584d38796186b651237d3a77de2591b712353b041e478e09930767e7d6: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:18:12.268379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966715842.mount: Deactivated successfully.
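The containerd exit events above render task state as quoted `key:value` pairs inside `msg="…"`, with the inner quotes backslash-escaped; `exit_status` is present here because the kube-controller-manager task exited non-zero. A minimal sketch of pulling the container ID, PID, and exit status out of such a line, assuming only the format visible above (the regex and names are illustrative):

```python
import re

# containerd TaskExit/exit-event payload, e.g.
#   container_id:\"90b3c9...\" id:\"...\" pid:3061 exit_status:1 exited_at:{...}
# exit_status is typically omitted for a clean (status 0) exit.
TASKEXIT_RE = re.compile(
    r'container_id:\\?"(?P<cid>[0-9a-f]{64})\\?"'
    r'.*?pid:(?P<pid>\d+)'
    r'(?: exit_status:(?P<status>\d+))?'
)

sample = (
    'container_id:\\"90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d\\" '
    'id:\\"90b3c952fbb7ca0e8024d3fa345091b615e674f93b36e0b75e7d60b5a17b4f1d\\" '
    'pid:3061 exit_status:1 exited_at:{seconds:1751847491 nanos:900958231}'
)
m = TASKEXIT_RE.search(sample)
```

Grouping these events by `cid` is one way to spot the restart loops in this log, e.g. the repeated exec exits under sandbox `6a942e0e…` versus the fatal `exit_status:1` of the control-plane containers.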
Jul 7 00:18:12.281789 containerd[1908]: time="2025-07-07T00:18:12.281703798Z" level=info msg="CreateContainer within sandbox \"e67c3a20b9678eab0a577f4420a040e7dfbc51cda3e242a6eed276ad8891a48a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0556ed584d38796186b651237d3a77de2591b712353b041e478e09930767e7d6\""
Jul 7 00:18:12.282399 containerd[1908]: time="2025-07-07T00:18:12.282362022Z" level=info msg="StartContainer for \"0556ed584d38796186b651237d3a77de2591b712353b041e478e09930767e7d6\""
Jul 7 00:18:12.283951 containerd[1908]: time="2025-07-07T00:18:12.283907467Z" level=info msg="connecting to shim 0556ed584d38796186b651237d3a77de2591b712353b041e478e09930767e7d6" address="unix:///run/containerd/s/09de36eed75ac9a6280ffdc71906afe78328ab96f610eb4573d58b6337b4a42c" protocol=ttrpc version=3
Jul 7 00:18:12.318478 systemd[1]: Started cri-containerd-0556ed584d38796186b651237d3a77de2591b712353b041e478e09930767e7d6.scope - libcontainer container 0556ed584d38796186b651237d3a77de2591b712353b041e478e09930767e7d6.
Jul 7 00:18:12.383739 containerd[1908]: time="2025-07-07T00:18:12.383694792Z" level=info msg="StartContainer for \"0556ed584d38796186b651237d3a77de2591b712353b041e478e09930767e7d6\" returns successfully"
Jul 7 00:18:13.155230 kubelet[3220]: E0707 00:18:13.150163 3220 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-121?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 7 00:18:16.440099 systemd[1]: cri-containerd-3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406.scope: Deactivated successfully.
Jul 7 00:18:16.441022 systemd[1]: cri-containerd-3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406.scope: Consumed 2.729s CPU time, 31.3M memory peak, 14.1M read from disk.
Jul 7 00:18:16.444384 containerd[1908]: time="2025-07-07T00:18:16.442665870Z" level=info msg="received exit event container_id:\"3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406\" id:\"3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406\" pid:3042 exit_status:1 exited_at:{seconds:1751847496 nanos:442309762}"
Jul 7 00:18:16.444384 containerd[1908]: time="2025-07-07T00:18:16.443050560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406\" id:\"3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406\" pid:3042 exit_status:1 exited_at:{seconds:1751847496 nanos:442309762}"
Jul 7 00:18:16.474593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406-rootfs.mount: Deactivated successfully.
Jul 7 00:18:17.269386 kubelet[3220]: I0707 00:18:17.269347 3220 scope.go:117] "RemoveContainer" containerID="3809a40aa44046317bb0a94fdb727a5a2ec911f8bd6862115a57815fed937406"
Jul 7 00:18:17.271758 containerd[1908]: time="2025-07-07T00:18:17.271716682Z" level=info msg="CreateContainer within sandbox \"c6b7c8954356cfcd044c0f498be36b26814cb3351be81389e969d7fcf832a64b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 7 00:18:17.289630 containerd[1908]: time="2025-07-07T00:18:17.288879306Z" level=info msg="Container c688cf6abbd7855b370e7eada814f0b6e8641bdabe8609eb0da50aafb78e09d7: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:18:17.293100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4285707888.mount: Deactivated successfully.
Jul 7 00:18:17.305145 containerd[1908]: time="2025-07-07T00:18:17.305093509Z" level=info msg="CreateContainer within sandbox \"c6b7c8954356cfcd044c0f498be36b26814cb3351be81389e969d7fcf832a64b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c688cf6abbd7855b370e7eada814f0b6e8641bdabe8609eb0da50aafb78e09d7\""
Jul 7 00:18:17.305748 containerd[1908]: time="2025-07-07T00:18:17.305720559Z" level=info msg="StartContainer for \"c688cf6abbd7855b370e7eada814f0b6e8641bdabe8609eb0da50aafb78e09d7\""
Jul 7 00:18:17.307376 containerd[1908]: time="2025-07-07T00:18:17.307108834Z" level=info msg="connecting to shim c688cf6abbd7855b370e7eada814f0b6e8641bdabe8609eb0da50aafb78e09d7" address="unix:///run/containerd/s/f5ff0195ce43d81a27898884ee08b3538e20fb13f2c3bc1ff8710d2117615a10" protocol=ttrpc version=3
Jul 7 00:18:17.342510 systemd[1]: Started cri-containerd-c688cf6abbd7855b370e7eada814f0b6e8641bdabe8609eb0da50aafb78e09d7.scope - libcontainer container c688cf6abbd7855b370e7eada814f0b6e8641bdabe8609eb0da50aafb78e09d7.
Jul 7 00:18:17.403663 containerd[1908]: time="2025-07-07T00:18:17.403620173Z" level=info msg="StartContainer for \"c688cf6abbd7855b370e7eada814f0b6e8641bdabe8609eb0da50aafb78e09d7\" returns successfully"
Jul 7 00:18:23.151599 kubelet[3220]: E0707 00:18:23.151523 3220 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-121?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
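The kubelet entries in this log use a klog-style header, which differs from the update_engine glog prefix above in that the date is month-day only (`E0707`) and the number after the time is a thread/goroutine ID. A sketch of parsing that header, assuming exactly the shape seen here (names are illustrative):

```python
import re

# klog header as in the kubelet entries above, e.g.
#   E0707 00:18:23.151523 3220 controller.go:195] "Failed to update lease" err="..."
KLOG_RE = re.compile(
    r"^(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>[\d:.]+)\s+"
    r"(?P<tid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)$"
)

def parse_klog(line):
    """Split a klog-prefixed kubelet line; return None if it doesn't match."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

rec = parse_klog('E0707 00:18:23.151523 3220 controller.go:195] "Failed to update lease"')
```

Filtering on `src == "controller.go:195"` would isolate the repeated lease-update timeouts, which bracket the kube-controller-manager and kube-scheduler restarts recorded above.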