Jul 6 23:35:29.917100 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:53:45 -00 2025
Jul 6 23:35:29.917126 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:35:29.917140 kernel: BIOS-provided physical RAM map:
Jul 6 23:35:29.917147 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 6 23:35:29.917153 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jul 6 23:35:29.917160 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jul 6 23:35:29.917168 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 6 23:35:29.917175 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 6 23:35:29.917182 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 6 23:35:29.917188 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 6 23:35:29.917198 kernel: NX (Execute Disable) protection: active
Jul 6 23:35:29.917205 kernel: APIC: Static calls initialized
Jul 6 23:35:29.917212 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Jul 6 23:35:29.917219 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Jul 6 23:35:29.917228 kernel: extended physical RAM map:
Jul 6 23:35:29.917235 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 6 23:35:29.917245 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Jul 6 23:35:29.917253 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Jul 6 23:35:29.917261 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Jul 6 23:35:29.917705 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jul 6 23:35:29.917717 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 6 23:35:29.917725 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 6 23:35:29.917733 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 6 23:35:29.917741 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 6 23:35:29.917748 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:35:29.917756 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Jul 6 23:35:29.917768 kernel: secureboot: Secure boot disabled
Jul 6 23:35:29.917776 kernel: SMBIOS 2.7 present.
Jul 6 23:35:29.917784 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jul 6 23:35:29.917791 kernel: Hypervisor detected: KVM
Jul 6 23:35:29.917799 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:35:29.917807 kernel: kvm-clock: using sched offset of 3899758680 cycles
Jul 6 23:35:29.917815 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:35:29.917823 kernel: tsc: Detected 2500.006 MHz processor
Jul 6 23:35:29.917832 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:35:29.917840 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:35:29.917847 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jul 6 23:35:29.917858 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 6 23:35:29.917866 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:35:29.917875 kernel: Using GB pages for direct mapping
Jul 6 23:35:29.917887 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:35:29.917895 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jul 6 23:35:29.917904 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 6 23:35:29.917915 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 6 23:35:29.917923 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jul 6 23:35:29.917931 kernel: ACPI: FACS 0x00000000789D0000 000040
Jul 6 23:35:29.917939 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jul 6 23:35:29.917948 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 6 23:35:29.917956 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 6 23:35:29.917964 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jul 6 23:35:29.917973 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jul 6 23:35:29.917984 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 6 23:35:29.917992 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 6 23:35:29.918000 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jul 6 23:35:29.918009 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jul 6 23:35:29.918017 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jul 6 23:35:29.918025 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jul 6 23:35:29.918033 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jul 6 23:35:29.918042 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jul 6 23:35:29.918050 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jul 6 23:35:29.918060 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jul 6 23:35:29.918069 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jul 6 23:35:29.918077 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jul 6 23:35:29.918085 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jul 6 23:35:29.918093 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jul 6 23:35:29.918101 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 6 23:35:29.918109 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 6 23:35:29.918118 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jul 6 23:35:29.918126 kernel: NUMA: Initialized distance table, cnt=1
Jul 6 23:35:29.918137 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Jul 6 23:35:29.918145 kernel: Zone ranges:
Jul 6 23:35:29.918153 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:35:29.918161 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jul 6 23:35:29.918170 kernel: Normal empty
Jul 6 23:35:29.918178 kernel: Movable zone start for each node
Jul 6 23:35:29.918186 kernel: Early memory node ranges
Jul 6 23:35:29.918194 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 6 23:35:29.918202 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jul 6 23:35:29.918213 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jul 6 23:35:29.918222 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jul 6 23:35:29.918230 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:35:29.918238 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 6 23:35:29.918246 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 6 23:35:29.918255 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jul 6 23:35:29.918264 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 6 23:35:29.919334 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:35:29.919345 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jul 6 23:35:29.919358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:35:29.919367 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:35:29.919375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:35:29.919384 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:35:29.919392 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:35:29.919401 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:35:29.919409 kernel: TSC deadline timer available
Jul 6 23:35:29.919418 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:35:29.919426 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:35:29.919434 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jul 6 23:35:29.919446 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:35:29.919454 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:35:29.919463 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:35:29.919472 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:35:29.919480 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:35:29.919488 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:35:29.919497 kernel: kvm-guest: PV spinlocks enabled
Jul 6 23:35:29.919505 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:35:29.919515 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:35:29.919528 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:35:29.919536 kernel: random: crng init done
Jul 6 23:35:29.919544 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:35:29.919552 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:35:29.919561 kernel: Fallback order for Node 0: 0
Jul 6 23:35:29.919569 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Jul 6 23:35:29.919577 kernel: Policy zone: DMA32
Jul 6 23:35:29.919585 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:35:29.919597 kernel: Memory: 1872532K/2037804K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43492K init, 1584K bss, 165016K reserved, 0K cma-reserved)
Jul 6 23:35:29.919605 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:35:29.919613 kernel: Kernel/User page tables isolation: enabled
Jul 6 23:35:29.919622 kernel: ftrace: allocating 37940 entries in 149 pages
Jul 6 23:35:29.919639 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:35:29.919650 kernel: Dynamic Preempt: voluntary
Jul 6 23:35:29.919659 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:35:29.919669 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:35:29.919678 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:35:29.919687 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:35:29.919696 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:35:29.919708 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:35:29.919716 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:35:29.919725 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:35:29.919734 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 6 23:35:29.919743 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:35:29.919755 kernel: Console: colour dummy device 80x25
Jul 6 23:35:29.919764 kernel: printk: console [tty0] enabled
Jul 6 23:35:29.919773 kernel: printk: console [ttyS0] enabled
Jul 6 23:35:29.919782 kernel: ACPI: Core revision 20230628
Jul 6 23:35:29.919791 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jul 6 23:35:29.919800 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:35:29.919809 kernel: x2apic enabled
Jul 6 23:35:29.919818 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:35:29.919827 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Jul 6 23:35:29.919839 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500006)
Jul 6 23:35:29.919848 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 6 23:35:29.919857 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 6 23:35:29.919866 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:35:29.919875 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:35:29.919883 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:35:29.919892 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 6 23:35:29.919901 kernel: RETBleed: Vulnerable
Jul 6 23:35:29.919909 kernel: Speculative Store Bypass: Vulnerable
Jul 6 23:35:29.919918 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:35:29.919930 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:35:29.919938 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 6 23:35:29.919947 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 6 23:35:29.919956 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:35:29.919964 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:35:29.919973 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:35:29.919982 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 6 23:35:29.919990 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 6 23:35:29.919999 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 6 23:35:29.920008 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 6 23:35:29.920016 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 6 23:35:29.920028 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 6 23:35:29.920036 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:35:29.920045 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 6 23:35:29.920054 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 6 23:35:29.920062 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jul 6 23:35:29.920071 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jul 6 23:35:29.920080 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jul 6 23:35:29.920088 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jul 6 23:35:29.920097 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jul 6 23:35:29.920106 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:35:29.920114 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:35:29.920123 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:35:29.920134 kernel: landlock: Up and running.
Jul 6 23:35:29.920143 kernel: SELinux: Initializing.
Jul 6 23:35:29.920152 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:35:29.920160 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:35:29.920169 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 6 23:35:29.920178 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:35:29.920187 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:35:29.920196 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:35:29.920205 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 6 23:35:29.920214 kernel: signal: max sigframe size: 3632
Jul 6 23:35:29.920226 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:35:29.920235 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:35:29.920244 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 6 23:35:29.920253 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:35:29.920261 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:35:29.921309 kernel: .... node #0, CPUs: #1
Jul 6 23:35:29.921325 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 6 23:35:29.921335 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 6 23:35:29.921349 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:35:29.921358 kernel: smpboot: Max logical packages: 1
Jul 6 23:35:29.921368 kernel: smpboot: Total of 2 processors activated (10000.02 BogoMIPS)
Jul 6 23:35:29.921377 kernel: devtmpfs: initialized
Jul 6 23:35:29.921386 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:35:29.921395 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jul 6 23:35:29.921404 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:35:29.921413 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:35:29.921422 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:35:29.921434 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:35:29.921443 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:35:29.921452 kernel: audit: type=2000 audit(1751844929.472:1): state=initialized audit_enabled=0 res=1
Jul 6 23:35:29.921461 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:35:29.921470 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:35:29.921479 kernel: cpuidle: using governor menu
Jul 6 23:35:29.921488 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:35:29.921497 kernel: dca service started, version 1.12.1
Jul 6 23:35:29.921506 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:35:29.921517 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:35:29.921527 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:35:29.921535 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:35:29.921544 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:35:29.921553 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:35:29.921562 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:35:29.921571 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:35:29.921580 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:35:29.921589 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 6 23:35:29.921600 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:35:29.921610 kernel: ACPI: Interpreter enabled
Jul 6 23:35:29.921619 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:35:29.921627 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:35:29.921637 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:35:29.921646 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:35:29.921655 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 6 23:35:29.921664 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:35:29.921833 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:35:29.921941 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 6 23:35:29.922037 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 6 23:35:29.922048 kernel: acpiphp: Slot [3] registered
Jul 6 23:35:29.922057 kernel: acpiphp: Slot [4] registered
Jul 6 23:35:29.922066 kernel: acpiphp: Slot [5] registered
Jul 6 23:35:29.922075 kernel: acpiphp: Slot [6] registered
Jul 6 23:35:29.922084 kernel: acpiphp: Slot [7] registered
Jul 6 23:35:29.922093 kernel: acpiphp: Slot [8] registered
Jul 6 23:35:29.922105 kernel: acpiphp: Slot [9] registered
Jul 6 23:35:29.922114 kernel: acpiphp: Slot [10] registered
Jul 6 23:35:29.922123 kernel: acpiphp: Slot [11] registered
Jul 6 23:35:29.922131 kernel: acpiphp: Slot [12] registered
Jul 6 23:35:29.922140 kernel: acpiphp: Slot [13] registered
Jul 6 23:35:29.922149 kernel: acpiphp: Slot [14] registered
Jul 6 23:35:29.922158 kernel: acpiphp: Slot [15] registered
Jul 6 23:35:29.922167 kernel: acpiphp: Slot [16] registered
Jul 6 23:35:29.922175 kernel: acpiphp: Slot [17] registered
Jul 6 23:35:29.922186 kernel: acpiphp: Slot [18] registered
Jul 6 23:35:29.922195 kernel: acpiphp: Slot [19] registered
Jul 6 23:35:29.922204 kernel: acpiphp: Slot [20] registered
Jul 6 23:35:29.922213 kernel: acpiphp: Slot [21] registered
Jul 6 23:35:29.922222 kernel: acpiphp: Slot [22] registered
Jul 6 23:35:29.922231 kernel: acpiphp: Slot [23] registered
Jul 6 23:35:29.922239 kernel: acpiphp: Slot [24] registered
Jul 6 23:35:29.922248 kernel: acpiphp: Slot [25] registered
Jul 6 23:35:29.922257 kernel: acpiphp: Slot [26] registered
Jul 6 23:35:29.922265 kernel: acpiphp: Slot [27] registered
Jul 6 23:35:29.923320 kernel: acpiphp: Slot [28] registered
Jul 6 23:35:29.923330 kernel: acpiphp: Slot [29] registered
Jul 6 23:35:29.923340 kernel: acpiphp: Slot [30] registered
Jul 6 23:35:29.923349 kernel: acpiphp: Slot [31] registered
Jul 6 23:35:29.923358 kernel: PCI host bridge to bus 0000:00
Jul 6 23:35:29.923488 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:35:29.923579 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:35:29.923665 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:35:29.923754 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 6 23:35:29.923838 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jul 6 23:35:29.923924 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:35:29.924042 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 6 23:35:29.924149 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 6 23:35:29.924253 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jul 6 23:35:29.925429 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 6 23:35:29.925537 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jul 6 23:35:29.925635 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jul 6 23:35:29.925732 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jul 6 23:35:29.925828 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jul 6 23:35:29.925923 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jul 6 23:35:29.926021 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jul 6 23:35:29.926122 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 10742 usecs
Jul 6 23:35:29.926224 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jul 6 23:35:29.926340 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Jul 6 23:35:29.926437 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 6 23:35:29.926531 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Jul 6 23:35:29.926626 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:35:29.926729 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 6 23:35:29.926829 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Jul 6 23:35:29.926931 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 6 23:35:29.927027 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Jul 6 23:35:29.927039 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:35:29.927056 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:35:29.927065 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:35:29.927075 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:35:29.927088 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 6 23:35:29.927097 kernel: iommu: Default domain type: Translated
Jul 6 23:35:29.927106 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:35:29.927115 kernel: efivars: Registered efivars operations
Jul 6 23:35:29.927125 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:35:29.927134 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:35:29.927143 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Jul 6 23:35:29.927152 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jul 6 23:35:29.927161 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jul 6 23:35:29.927260 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jul 6 23:35:29.929518 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jul 6 23:35:29.929624 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:35:29.929637 kernel: vgaarb: loaded
Jul 6 23:35:29.929648 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jul 6 23:35:29.929657 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jul 6 23:35:29.929667 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:35:29.929676 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:35:29.929686 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:35:29.929700 kernel: pnp: PnP ACPI init
Jul 6 23:35:29.929710 kernel: pnp: PnP ACPI: found 5 devices
Jul 6 23:35:29.929719 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:35:29.929728 kernel: NET: Registered PF_INET protocol family
Jul 6 23:35:29.929738 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:35:29.929747 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 6 23:35:29.929756 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:35:29.929765 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:35:29.929777 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 6 23:35:29.929786 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 6 23:35:29.929795 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:35:29.929804 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:35:29.929813 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:35:29.929822 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:35:29.929919 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:35:29.930009 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:35:29.930096 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:35:29.930187 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 6 23:35:29.931300 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jul 6 23:35:29.931426 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 6 23:35:29.931440 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:35:29.931451 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 6 23:35:29.931461 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Jul 6 23:35:29.931470 kernel: clocksource: Switched to clocksource tsc
Jul 6 23:35:29.931479 kernel: Initialise system trusted keyrings
Jul 6 23:35:29.931492 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 6 23:35:29.931501 kernel: Key type asymmetric registered
Jul 6 23:35:29.931510 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:35:29.931519 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:35:29.931528 kernel: io scheduler mq-deadline registered
Jul 6 23:35:29.931537 kernel: io scheduler kyber registered
Jul 6 23:35:29.931546 kernel: io scheduler bfq registered
Jul 6 23:35:29.931555 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:35:29.931564 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:35:29.931573 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:35:29.931585 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:35:29.931594 kernel: i8042: Warning: Keylock active
Jul 6 23:35:29.931603 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:35:29.931612 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:35:29.931725 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 6 23:35:29.931819 kernel: rtc_cmos 00:00: registered as rtc0
Jul 6 23:35:29.931910 kernel: rtc_cmos 00:00: setting system clock to 2025-07-06T23:35:29 UTC (1751844929)
Jul 6 23:35:29.932004 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 6 23:35:29.932015 kernel: intel_pstate: CPU model not supported
Jul 6 23:35:29.932024 kernel: efifb: probing for efifb
Jul 6 23:35:29.932033 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Jul 6 23:35:29.932060 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jul 6 23:35:29.932072 kernel: efifb: scrolling: redraw
Jul 6 23:35:29.932082 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 6 23:35:29.932092 kernel: Console: switching to colour frame buffer device 100x37
Jul 6 23:35:29.932101 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:35:29.932114 kernel: pstore: Using crash dump compression: deflate
Jul 6 23:35:29.932123 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 6 23:35:29.932133 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:35:29.932145 kernel: Segment Routing with IPv6
Jul 6 23:35:29.932154 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:35:29.932164 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:35:29.932173 kernel: Key type dns_resolver registered
Jul 6 23:35:29.932183 kernel: IPI shorthand broadcast: enabled
Jul 6 23:35:29.932192 kernel: sched_clock: Marking stable (514003363, 141636364)->(737832849, -82193122)
Jul 6 23:35:29.932205 kernel: registered taskstats version 1
Jul 6 23:35:29.932214 kernel: Loading compiled-in X.509 certificates
Jul 6 23:35:29.932224 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: f74b958d282931d4f0d8d911dd18abd0ec707734'
Jul 6 23:35:29.932233 kernel: Key type .fscrypt registered
Jul 6 23:35:29.932242 kernel: Key type fscrypt-provisioning registered
Jul 6 23:35:29.932252 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:35:29.932261 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:35:29.935389 kernel: ima: No architecture policies found
Jul 6 23:35:29.935408 kernel: clk: Disabling unused clocks
Jul 6 23:35:29.935429 kernel: Freeing unused kernel image (initmem) memory: 43492K
Jul 6 23:35:29.935440 kernel: Write protecting the kernel read-only data: 38912k
Jul 6 23:35:29.935449 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jul 6 23:35:29.935459 kernel: Run /init as init process
Jul 6 23:35:29.935469 kernel: with arguments:
Jul 6 23:35:29.935479 kernel: /init
Jul 6 23:35:29.935488 kernel: with environment:
Jul 6 23:35:29.935497 kernel: HOME=/
Jul 6 23:35:29.935507 kernel: TERM=linux
Jul 6 23:35:29.935520 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:35:29.935531 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:35:29.935545 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:35:29.935556 systemd[1]: Detected virtualization amazon.
Jul 6 23:35:29.935566 systemd[1]: Detected architecture x86-64.
Jul 6 23:35:29.935579 systemd[1]: Running in initrd.
Jul 6 23:35:29.935588 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:35:29.935599 systemd[1]: Hostname set to .
Jul 6 23:35:29.935609 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:35:29.935618 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:35:29.935628 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:35:29.935639 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:35:29.935653 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:35:29.935663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:35:29.935673 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:35:29.935683 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:35:29.935694 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:35:29.935705 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:35:29.935715 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:35:29.935728 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:35:29.935738 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:35:29.935748 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:35:29.935758 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:35:29.935767 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:35:29.935777 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:35:29.935787 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:35:29.935797 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:35:29.935810 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:35:29.935820 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:35:29.935830 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:35:29.935840 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:35:29.935849 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:35:29.935860 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:35:29.935870 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:35:29.935880 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:35:29.935890 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:35:29.935907 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:35:29.935917 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:35:29.935927 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:35:29.935936 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:35:29.935982 systemd-journald[179]: Collecting audit messages is disabled.
Jul 6 23:35:29.936009 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:35:29.936021 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:35:29.936034 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:35:29.936048 systemd-journald[179]: Journal started
Jul 6 23:35:29.936070 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2cf6cbae6476ca7ae63f96ea8804b3) is 4.7M, max 38.1M, 33.4M free.
Jul 6 23:35:29.938708 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:35:29.940336 systemd-modules-load[180]: Inserted module 'overlay'
Jul 6 23:35:29.942354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:35:29.943867 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:35:29.952548 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:35:29.955411 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:35:29.970182 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:35:29.984841 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:35:29.987609 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jul 6 23:35:29.988382 kernel: Bridge firewalling registered
Jul 6 23:35:29.990091 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:35:29.991841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:35:29.995609 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:35:30.007303 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:35:30.006189 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:35:30.010419 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:35:30.011189 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:35:30.016842 dracut-cmdline[210]: dracut-dracut-053
Jul 6 23:35:30.021741 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:35:30.033149 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:35:30.042489 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:35:30.092047 systemd-resolved[238]: Positive Trust Anchors:
Jul 6 23:35:30.093015 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:35:30.093087 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:35:30.102199 systemd-resolved[238]: Defaulting to hostname 'linux'.
Jul 6 23:35:30.103610 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:35:30.104376 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:35:30.121302 kernel: SCSI subsystem initialized
Jul 6 23:35:30.132304 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:35:30.143304 kernel: iscsi: registered transport (tcp)
Jul 6 23:35:30.165534 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:35:30.165614 kernel: QLogic iSCSI HBA Driver
Jul 6 23:35:30.204551 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:35:30.209471 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:35:30.236698 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:35:30.236777 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:35:30.236801 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:35:30.280324 kernel: raid6: avx512x4 gen() 18155 MB/s
Jul 6 23:35:30.298321 kernel: raid6: avx512x2 gen() 18030 MB/s
Jul 6 23:35:30.316324 kernel: raid6: avx512x1 gen() 18049 MB/s
Jul 6 23:35:30.334315 kernel: raid6: avx2x4 gen() 17797 MB/s
Jul 6 23:35:30.352319 kernel: raid6: avx2x2 gen() 17906 MB/s
Jul 6 23:35:30.370432 kernel: raid6: avx2x1 gen() 14021 MB/s
Jul 6 23:35:30.370481 kernel: raid6: using algorithm avx512x4 gen() 18155 MB/s
Jul 6 23:35:30.389480 kernel: raid6: .... xor() 7967 MB/s, rmw enabled
Jul 6 23:35:30.389536 kernel: raid6: using avx512x2 recovery algorithm
Jul 6 23:35:30.411319 kernel: xor: automatically using best checksumming function avx
Jul 6 23:35:30.567301 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:35:30.577242 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:35:30.586560 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:35:30.601679 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jul 6 23:35:30.607820 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:35:30.615452 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:35:30.635525 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jul 6 23:35:30.666879 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:35:30.673438 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:35:30.726767 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:35:30.734745 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:35:30.765410 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:35:30.769733 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:35:30.771537 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:35:30.772754 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:35:30.780552 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:35:30.797658 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:35:30.821985 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 6 23:35:30.822288 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 6 23:35:30.827294 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jul 6 23:35:30.836389 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:45:1f:04:d3:45
Jul 6 23:35:30.838300 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:35:30.854104 (udev-worker)[457]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:35:30.869651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:35:30.877931 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:35:30.877967 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:35:30.869841 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:35:30.870733 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:35:30.871414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:35:30.871600 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:35:30.874096 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:35:30.881633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:35:30.906432 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 6 23:35:30.906678 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 6 23:35:30.913870 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 6 23:35:30.920463 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:35:30.929402 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:35:30.929437 kernel: GPT:9289727 != 16777215
Jul 6 23:35:30.929456 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:35:30.929475 kernel: GPT:9289727 != 16777215
Jul 6 23:35:30.929493 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:35:30.929520 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:35:30.920646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:35:30.931469 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:35:30.942014 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:35:30.954593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:35:30.958554 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:35:30.988693 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:35:31.029316 kernel: BTRFS: device fsid 25bdfe43-d649-4808-8940-e1722efc7a2e devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (454)
Jul 6 23:35:31.037294 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by (udev-worker) (460)
Jul 6 23:35:31.069070 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 6 23:35:31.099259 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 6 23:35:31.108472 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 6 23:35:31.108997 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 6 23:35:31.127992 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 6 23:35:31.134540 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:35:31.141551 disk-uuid[637]: Primary Header is updated.
Jul 6 23:35:31.141551 disk-uuid[637]: Secondary Entries is updated.
Jul 6 23:35:31.141551 disk-uuid[637]: Secondary Header is updated.
Jul 6 23:35:31.148292 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:35:32.160457 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:35:32.160513 disk-uuid[638]: The operation has completed successfully.
Jul 6 23:35:32.315470 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:35:32.315613 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:35:32.342468 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:35:32.346743 sh[898]: Success
Jul 6 23:35:32.361307 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 6 23:35:32.455939 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:35:32.471471 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:35:32.476924 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:35:32.504978 kernel: BTRFS info (device dm-0): first mount of filesystem 25bdfe43-d649-4808-8940-e1722efc7a2e
Jul 6 23:35:32.505047 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:35:32.505061 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:35:32.508308 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:35:32.508373 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:35:32.610316 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 6 23:35:32.624435 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:35:32.625515 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:35:32.629469 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:35:32.631446 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:35:32.660785 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:35:32.660864 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:35:32.662744 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 6 23:35:32.680302 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 6 23:35:32.688298 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:35:32.690290 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:35:32.699605 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:35:32.729157 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:35:32.735477 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:35:32.761925 systemd-networkd[1088]: lo: Link UP
Jul 6 23:35:32.761936 systemd-networkd[1088]: lo: Gained carrier
Jul 6 23:35:32.764131 systemd-networkd[1088]: Enumeration completed
Jul 6 23:35:32.764569 systemd-networkd[1088]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:35:32.764575 systemd-networkd[1088]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:35:32.766344 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:35:32.768900 systemd[1]: Reached target network.target - Network.
Jul 6 23:35:32.770205 systemd-networkd[1088]: eth0: Link UP
Jul 6 23:35:32.770213 systemd-networkd[1088]: eth0: Gained carrier
Jul 6 23:35:32.770230 systemd-networkd[1088]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:35:32.788367 systemd-networkd[1088]: eth0: DHCPv4 address 172.31.20.250/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 6 23:35:33.047607 ignition[1042]: Ignition 2.20.0
Jul 6 23:35:33.047619 ignition[1042]: Stage: fetch-offline
Jul 6 23:35:33.047804 ignition[1042]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:35:33.047813 ignition[1042]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:35:33.048134 ignition[1042]: Ignition finished successfully
Jul 6 23:35:33.049777 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:35:33.056446 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:35:33.068851 ignition[1098]: Ignition 2.20.0
Jul 6 23:35:33.068862 ignition[1098]: Stage: fetch
Jul 6 23:35:33.069169 ignition[1098]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:35:33.069178 ignition[1098]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:35:33.069257 ignition[1098]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:35:33.087729 ignition[1098]: PUT result: OK
Jul 6 23:35:33.090707 ignition[1098]: parsed url from cmdline: ""
Jul 6 23:35:33.090721 ignition[1098]: no config URL provided
Jul 6 23:35:33.090733 ignition[1098]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:35:33.090749 ignition[1098]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:35:33.090774 ignition[1098]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:35:33.091787 ignition[1098]: PUT result: OK
Jul 6 23:35:33.091857 ignition[1098]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 6 23:35:33.093826 ignition[1098]: GET result: OK
Jul 6 23:35:33.093971 ignition[1098]: parsing config with SHA512: f986e3f226eb3f0ec3c0859abf239d104245a3e260f8e47d49933eae03914e14035e1170225818b39a7ed1ef026e98080a90d5b1668e2ac1472a95dbf87ebbf2
Jul 6 23:35:33.098974 unknown[1098]: fetched base config from "system"
Jul 6 23:35:33.098990 unknown[1098]: fetched base config from "system"
Jul 6 23:35:33.099662 ignition[1098]: fetch: fetch complete
Jul 6 23:35:33.098997 unknown[1098]: fetched user config from "aws"
Jul 6 23:35:33.099669 ignition[1098]: fetch: fetch passed
Jul 6 23:35:33.099731 ignition[1098]: Ignition finished successfully
Jul 6 23:35:33.102023 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:35:33.106468 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:35:33.123786 ignition[1104]: Ignition 2.20.0
Jul 6 23:35:33.123800 ignition[1104]: Stage: kargs
Jul 6 23:35:33.124306 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:35:33.124324 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:35:33.124455 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:35:33.125382 ignition[1104]: PUT result: OK
Jul 6 23:35:33.128239 ignition[1104]: kargs: kargs passed
Jul 6 23:35:33.128324 ignition[1104]: Ignition finished successfully
Jul 6 23:35:33.129685 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:35:33.135481 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:35:33.149167 ignition[1110]: Ignition 2.20.0
Jul 6 23:35:33.149182 ignition[1110]: Stage: disks
Jul 6 23:35:33.149632 ignition[1110]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:35:33.149646 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:35:33.149768 ignition[1110]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:35:33.150650 ignition[1110]: PUT result: OK
Jul 6 23:35:33.153154 ignition[1110]: disks: disks passed
Jul 6 23:35:33.153230 ignition[1110]: Ignition finished successfully
Jul 6 23:35:33.154483 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:35:33.155545 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:35:33.155930 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:35:33.156484 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:35:33.157015 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:35:33.157587 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:35:33.161448 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:35:33.200602 systemd-fsck[1118]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:35:33.203417 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:35:33.208398 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:35:33.308293 kernel: EXT4-fs (nvme0n1p9): mounted filesystem daab0c95-3783-44c0-bef8-9d61a5c53c14 r/w with ordered data mode. Quota mode: none.
Jul 6 23:35:33.309465 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:35:33.310386 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:35:33.329468 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:35:33.332726 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:35:33.334641 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:35:33.335404 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:35:33.335440 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:35:33.346618 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:35:33.351554 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1137)
Jul 6 23:35:33.354482 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:35:33.358815 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:35:33.358842 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:35:33.358855 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 6 23:35:33.362302 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 6 23:35:33.364354 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:35:33.786924 initrd-setup-root[1161]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:35:33.817427 initrd-setup-root[1168]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:35:33.821514 initrd-setup-root[1175]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:35:33.826061 initrd-setup-root[1182]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:35:33.936383 systemd-networkd[1088]: eth0: Gained IPv6LL
Jul 6 23:35:34.116988 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:35:34.125494 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:35:34.128616 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:35:34.137152 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:35:34.139292 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:35:34.173311 ignition[1249]: INFO : Ignition 2.20.0
Jul 6 23:35:34.173311 ignition[1249]: INFO : Stage: mount
Jul 6 23:35:34.173311 ignition[1249]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:35:34.173311 ignition[1249]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:35:34.173311 ignition[1249]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:35:34.177683 ignition[1249]: INFO : PUT result: OK
Jul 6 23:35:34.181215 ignition[1249]: INFO : mount: mount passed
Jul 6 23:35:34.181215 ignition[1249]: INFO : Ignition finished successfully
Jul 6 23:35:34.183379 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:35:34.184032 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:35:34.189376 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:35:34.202484 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:35:34.223294 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1262)
Jul 6 23:35:34.227334 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:35:34.227403 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:35:34.227418 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 6 23:35:34.240300 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 6 23:35:34.242757 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:35:34.264625 ignition[1279]: INFO : Ignition 2.20.0
Jul 6 23:35:34.264625 ignition[1279]: INFO : Stage: files
Jul 6 23:35:34.266051 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:35:34.266051 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:35:34.266051 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:35:34.267529 ignition[1279]: INFO : PUT result: OK
Jul 6 23:35:34.269233 ignition[1279]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:35:34.270086 ignition[1279]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:35:34.270086 ignition[1279]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:35:34.310016 ignition[1279]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:35:34.310909 ignition[1279]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:35:34.310909 ignition[1279]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:35:34.310488 unknown[1279]: wrote ssh authorized keys file for user: core
Jul 6 23:35:34.326313 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:35:34.327520 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 6 23:35:34.427427 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:35:34.628192 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:35:34.629354 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:35:34.629354 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 6 23:35:35.081204 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:35:35.200334 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:35:35.201263 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:35:35.210970 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 6 23:35:35.870811 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:35:36.234911 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:35:36.234911 ignition[1279]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 6 23:35:36.238294 ignition[1279]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:35:36.239857 ignition[1279]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:35:36.239857 ignition[1279]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 6 23:35:36.239857 ignition[1279]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:35:36.239857 ignition[1279]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:35:36.239857 ignition[1279]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:35:36.239857 ignition[1279]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:35:36.239857 ignition[1279]: INFO : files: files passed
Jul 6 23:35:36.239857 ignition[1279]: INFO : Ignition finished successfully
Jul 6 23:35:36.241574 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:35:36.248535 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:35:36.251458 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:35:36.256144 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:35:36.257129 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:35:36.288185 initrd-setup-root-after-ignition[1307]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:35:36.290096 initrd-setup-root-after-ignition[1311]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:35:36.291465 initrd-setup-root-after-ignition[1307]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:35:36.292265 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:35:36.293106 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:35:36.297479 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:35:36.330513 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:35:36.330629 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:35:36.332529 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:35:36.333486 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:35:36.334340 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:35:36.336156 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:35:36.353442 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:35:36.358501 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:35:36.372228 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:35:36.373501 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:35:36.374247 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:35:36.375186 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:35:36.375409 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:35:36.376149 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:35:36.376988 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:35:36.377771 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:35:36.378549 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:35:36.379445 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:35:36.380212 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:35:36.380977 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:35:36.381773 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:35:36.382535 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:35:36.383703 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:35:36.384419 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:35:36.384609 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:35:36.385701 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:35:36.386502 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:35:36.387323 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:35:36.387474 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:35:36.388086 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:35:36.388299 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:35:36.389387 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:35:36.389583 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:35:36.390673 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:35:36.390836 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:35:36.398814 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:35:36.400220 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:35:36.400452 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:35:36.405379 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:35:36.406765 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:35:36.406966 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:35:36.409170 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:35:36.409408 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:35:36.418705 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:35:36.418835 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:35:36.428206 ignition[1331]: INFO : Ignition 2.20.0
Jul 6 23:35:36.430078 ignition[1331]: INFO : Stage: umount
Jul 6 23:35:36.430078 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:35:36.430078 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:35:36.430078 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:35:36.434677 ignition[1331]: INFO : PUT result: OK
Jul 6 23:35:36.436121 ignition[1331]: INFO : umount: umount passed
Jul 6 23:35:36.436121 ignition[1331]: INFO : Ignition finished successfully
Jul 6 23:35:36.438448 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:35:36.438615 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:35:36.440335 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:35:36.440399 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:35:36.440916 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:35:36.440978 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:35:36.442419 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:35:36.442487 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:35:36.442947 systemd[1]: Stopped target network.target - Network.
Jul 6 23:35:36.443643 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:35:36.443713 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:35:36.444307 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:35:36.444849 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:35:36.450381 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:35:36.451091 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:35:36.452107 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:35:36.452790 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:35:36.452856 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:35:36.453425 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:35:36.453479 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:35:36.454012 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:35:36.454094 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:35:36.454733 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:35:36.454796 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:35:36.455627 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:35:36.456221 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:35:36.458838 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:35:36.463650 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:35:36.463798 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:35:36.468063 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 6 23:35:36.468537 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:35:36.468617 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:35:36.472228 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:35:36.472631 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:35:36.472779 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:35:36.475334 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 6 23:35:36.476051 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:35:36.476138 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:35:36.482413 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:35:36.483208 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:35:36.483311 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:35:36.484032 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:35:36.484100 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:35:36.485487 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:35:36.485547 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:35:36.486057 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:35:36.492246 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:35:36.502544 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:35:36.502673 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:35:36.505003 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:35:36.505154 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:35:36.506432 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:35:36.506495 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:35:36.507667 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:35:36.507722 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:35:36.508424 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:35:36.508499 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:35:36.509624 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:35:36.509693 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:35:36.510806 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:35:36.510875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:35:36.517618 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:35:36.518250 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:35:36.518356 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:35:36.521467 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:35:36.521547 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:35:36.527961 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:35:36.528101 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:35:36.557147 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:35:36.557263 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:35:36.558361 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:35:36.558861 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:35:36.558917 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:35:36.576536 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:35:36.596493 systemd[1]: Switching root.
Jul 6 23:35:36.635260 systemd-journald[179]: Journal stopped
Jul 6 23:35:38.366877 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:35:38.366955 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:35:38.366983 kernel: SELinux: policy capability open_perms=1
Jul 6 23:35:38.367000 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:35:38.367012 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:35:38.367023 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:35:38.367036 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:35:38.367048 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:35:38.367060 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:35:38.367072 kernel: audit: type=1403 audit(1751844936.995:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:35:38.367089 systemd[1]: Successfully loaded SELinux policy in 67.089ms.
Jul 6 23:35:38.367112 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.699ms.
Jul 6 23:35:38.367126 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:35:38.367140 systemd[1]: Detected virtualization amazon.
Jul 6 23:35:38.367153 systemd[1]: Detected architecture x86-64.
Jul 6 23:35:38.367165 systemd[1]: Detected first boot.
Jul 6 23:35:38.367178 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:35:38.367191 zram_generator::config[1376]: No configuration found.
Jul 6 23:35:38.367205 kernel: Guest personality initialized and is inactive
Jul 6 23:35:38.367217 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 6 23:35:38.367232 kernel: Initialized host personality
Jul 6 23:35:38.367243 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:35:38.367256 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:35:38.367281 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 6 23:35:38.369372 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:35:38.369389 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:35:38.369404 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:35:38.369418 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:35:38.369430 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:35:38.369452 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:35:38.369465 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:35:38.369478 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:35:38.369491 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:35:38.369503 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:35:38.369516 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:35:38.369529 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:35:38.369542 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:35:38.369555 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:35:38.369570 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:35:38.369582 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:35:38.369595 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:35:38.369607 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 6 23:35:38.369620 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:35:38.369633 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:35:38.369646 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:35:38.369661 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:35:38.369675 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:35:38.369688 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:35:38.369701 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:35:38.369713 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:35:38.369727 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:35:38.369739 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:35:38.369752 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:35:38.369765 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 6 23:35:38.369779 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:35:38.369792 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:35:38.369804 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:35:38.369817 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:35:38.369829 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:35:38.369842 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:35:38.369854 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:35:38.369866 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:35:38.369879 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:35:38.369895 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:35:38.369907 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:35:38.369920 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:35:38.369932 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:35:38.369944 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:35:38.369962 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:35:38.369975 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:35:38.369987 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:35:38.370002 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:35:38.370014 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:35:38.370027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:35:38.370040 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:35:38.370053 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:35:38.370065 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:35:38.370078 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:35:38.370090 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:35:38.370103 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:35:38.370120 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:35:38.370133 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:35:38.370146 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:35:38.370158 kernel: loop: module loaded
Jul 6 23:35:38.370171 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:35:38.370184 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:35:38.370196 kernel: fuse: init (API version 7.39)
Jul 6 23:35:38.370208 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:35:38.370223 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 6 23:35:38.370235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:35:38.370248 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:35:38.370261 systemd[1]: Stopped verity-setup.service.
Jul 6 23:35:38.376335 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:35:38.376366 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:35:38.376379 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:35:38.376392 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:35:38.376404 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:35:38.376417 kernel: ACPI: bus type drm_connector registered
Jul 6 23:35:38.376437 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:35:38.376449 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:35:38.376463 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:35:38.376475 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:35:38.376489 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:35:38.376502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:35:38.376515 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:35:38.376528 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:35:38.376541 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:35:38.376558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:35:38.376571 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:35:38.376585 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:35:38.376598 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:35:38.376610 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:35:38.376623 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:35:38.376636 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:35:38.376650 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:35:38.376696 systemd-journald[1455]: Collecting audit messages is disabled.
Jul 6 23:35:38.376726 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:35:38.376739 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 6 23:35:38.376753 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:35:38.376768 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:35:38.376783 systemd-journald[1455]: Journal started
Jul 6 23:35:38.376808 systemd-journald[1455]: Runtime Journal (/run/log/journal/ec2cf6cbae6476ca7ae63f96ea8804b3) is 4.7M, max 38.1M, 33.4M free.
Jul 6 23:35:38.039886 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:35:38.051786 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 6 23:35:38.052229 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:35:38.381904 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:35:38.385647 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:35:38.388714 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:35:38.392296 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 6 23:35:38.401678 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:35:38.408817 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:35:38.408875 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:35:38.415779 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:35:38.415856 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:35:38.434403 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:35:38.439349 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:35:38.441379 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:35:38.452387 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:35:38.459033 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:35:38.463560 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:35:38.468958 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:35:38.471053 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:35:38.472379 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:35:38.473510 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:35:38.474643 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:35:38.491194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:35:38.498626 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:35:38.508963 kernel: loop0: detected capacity change from 0 to 62832
Jul 6 23:35:38.506533 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:35:38.513426 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 6 23:35:38.520470 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:35:38.532478 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:35:38.539638 systemd-journald[1455]: Time spent on flushing to /var/log/journal/ec2cf6cbae6476ca7ae63f96ea8804b3 is 74.412ms for 1018 entries.
Jul 6 23:35:38.539638 systemd-journald[1455]: System Journal (/var/log/journal/ec2cf6cbae6476ca7ae63f96ea8804b3) is 8M, max 195.6M, 187.6M free.
Jul 6 23:35:38.625702 systemd-journald[1455]: Received client request to flush runtime journal.
Jul 6 23:35:38.579204 udevadm[1522]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 6 23:35:38.627935 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:35:38.645156 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:35:38.646905 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 6 23:35:38.655177 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:35:38.665535 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:35:38.693050 kernel: loop1: detected capacity change from 0 to 138176
Jul 6 23:35:38.707553 systemd-tmpfiles[1530]: ACLs are not supported, ignoring.
Jul 6 23:35:38.708008 systemd-tmpfiles[1530]: ACLs are not supported, ignoring.
Jul 6 23:35:38.715587 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:35:38.816037 kernel: loop2: detected capacity change from 0 to 221472
Jul 6 23:35:38.872309 kernel: loop3: detected capacity change from 0 to 147912
Jul 6 23:35:38.987305 kernel: loop4: detected capacity change from 0 to 62832
Jul 6 23:35:39.002597 kernel: loop5: detected capacity change from 0 to 138176
Jul 6 23:35:39.032248 kernel: loop6: detected capacity change from 0 to 221472
Jul 6 23:35:39.055940 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:35:39.064388 kernel: loop7: detected capacity change from 0 to 147912
Jul 6 23:35:39.085608 (sd-merge)[1536]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 6 23:35:39.086176 (sd-merge)[1536]: Merged extensions into '/usr'.
Jul 6 23:35:39.090388 systemd[1]: Reload requested from client PID 1492 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:35:39.090406 systemd[1]: Reloading...
Jul 6 23:35:39.169294 zram_generator::config[1563]: No configuration found.
Jul 6 23:35:39.349837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:35:39.456470 systemd[1]: Reloading finished in 365 ms.
Jul 6 23:35:39.470450 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:35:39.471466 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:35:39.483864 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:35:39.486515 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:35:39.490875 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:35:39.520391 systemd[1]: Reload requested from client PID 1616 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:35:39.520414 systemd[1]: Reloading...
Jul 6 23:35:39.549437 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:35:39.552438 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:35:39.559699 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:35:39.562003 systemd-tmpfiles[1617]: ACLs are not supported, ignoring. Jul 6 23:35:39.562829 systemd-tmpfiles[1617]: ACLs are not supported, ignoring. Jul 6 23:35:39.569032 systemd-udevd[1618]: Using default interface naming scheme 'v255'. Jul 6 23:35:39.574811 systemd-tmpfiles[1617]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:35:39.574828 systemd-tmpfiles[1617]: Skipping /boot Jul 6 23:35:39.593074 systemd-tmpfiles[1617]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:35:39.593217 systemd-tmpfiles[1617]: Skipping /boot Jul 6 23:35:39.677292 zram_generator::config[1649]: No configuration found. Jul 6 23:35:39.843489 (udev-worker)[1682]: Network interface NamePolicy= disabled on kernel command line. 
Jul 6 23:35:39.911108 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 6 23:35:39.918293 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:35:39.918386 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jul 6 23:35:39.918414 kernel: ACPI: button: Sleep Button [SLPF] Jul 6 23:35:39.936046 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jul 6 23:35:39.971568 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 6 23:35:39.997301 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1682) Jul 6 23:35:40.062040 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:35:40.146772 ldconfig[1488]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:35:40.181331 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:35:40.277469 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:35:40.278022 systemd[1]: Reloading finished in 757 ms. Jul 6 23:35:40.292979 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:35:40.293795 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:35:40.303927 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:35:40.328121 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:35:40.338666 systemd[1]: Finished ensure-sysext.service. Jul 6 23:35:40.362670 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Jul 6 23:35:40.369517 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:35:40.374618 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:35:40.379490 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:35:40.382168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:35:40.385482 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:35:40.389493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:35:40.393146 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:35:40.401478 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:35:40.405485 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:35:40.406844 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:35:40.414553 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:35:40.415212 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:35:40.417579 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:35:40.429490 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:35:40.440366 lvm[1815]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:35:40.440469 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 6 23:35:40.441888 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:35:40.446446 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:35:40.456615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:35:40.457831 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:35:40.460109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:35:40.460597 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:35:40.463935 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:35:40.464933 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:35:40.489198 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:35:40.489489 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:35:40.495504 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:35:40.496043 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:35:40.502302 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:35:40.509047 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:35:40.524096 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:35:40.532879 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:35:40.537689 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:35:40.539856 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jul 6 23:35:40.552112 lvm[1846]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:35:40.551332 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:35:40.564353 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:35:40.581315 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:35:40.586259 augenrules[1857]: No rules Jul 6 23:35:40.589570 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:35:40.590588 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:35:40.591333 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:35:40.609709 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:35:40.627628 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:35:40.636644 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:35:40.650537 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:35:40.658667 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:35:40.668345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:35:40.747477 systemd-resolved[1828]: Positive Trust Anchors: Jul 6 23:35:40.747503 systemd-resolved[1828]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:35:40.747556 systemd-resolved[1828]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:35:40.752613 systemd-resolved[1828]: Defaulting to hostname 'linux'. Jul 6 23:35:40.754210 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:35:40.754651 systemd-networkd[1827]: lo: Link UP Jul 6 23:35:40.754788 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:35:40.754970 systemd-networkd[1827]: lo: Gained carrier Jul 6 23:35:40.755244 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:35:40.755717 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:35:40.756090 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:35:40.756598 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:35:40.756762 systemd-networkd[1827]: Enumeration completed Jul 6 23:35:40.756986 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:35:40.757346 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:35:40.757592 systemd-networkd[1827]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:35:40.757649 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:35:40.757677 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:35:40.757972 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:35:40.758049 systemd-networkd[1827]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:35:40.760334 systemd-networkd[1827]: eth0: Link UP Jul 6 23:35:40.760437 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:35:40.762244 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:35:40.764575 systemd-networkd[1827]: eth0: Gained carrier Jul 6 23:35:40.764604 systemd-networkd[1827]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:35:40.765932 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:35:40.766536 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:35:40.766909 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:35:40.769718 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:35:40.770567 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:35:40.771597 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:35:40.772121 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:35:40.772575 systemd[1]: Reached target network.target - Network. Jul 6 23:35:40.772898 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:35:40.773216 systemd[1]: Reached target basic.target - Basic System. 
Jul 6 23:35:40.773663 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:35:40.773702 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:35:40.774709 systemd-networkd[1827]: eth0: DHCPv4 address 172.31.20.250/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 6 23:35:40.780437 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:35:40.782364 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:35:40.784460 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:35:40.788427 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:35:40.790150 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:35:40.790616 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:35:40.792427 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:35:40.794940 systemd[1]: Started ntpd.service - Network Time Service. Jul 6 23:35:40.798379 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:35:40.803044 jq[1883]: false Jul 6 23:35:40.807460 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 6 23:35:40.810504 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:35:40.814292 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:35:40.820104 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:35:40.827118 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jul 6 23:35:40.831458 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:35:40.833784 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:35:40.835336 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:35:40.838492 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:35:40.846860 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:35:40.851243 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:35:40.851682 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:35:40.855745 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:35:40.855929 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:35:40.913059 jq[1895]: true Jul 6 23:35:40.925914 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:35:40.931303 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:35:40.935527 jq[1919]: true Jul 6 23:35:40.931556 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 6 23:35:40.947573 ntpd[1886]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:18:29 UTC 2025 (1): Starting Jul 6 23:35:40.947895 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:18:29 UTC 2025 (1): Starting Jul 6 23:35:40.947895 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 6 23:35:40.947895 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: ---------------------------------------------------- Jul 6 23:35:40.947895 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: ntp-4 is maintained by Network Time Foundation, Jul 6 23:35:40.947895 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 6 23:35:40.947895 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: corporation. Support and training for ntp-4 are Jul 6 23:35:40.947895 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: available at https://www.nwtime.org/support Jul 6 23:35:40.947895 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: ----------------------------------------------------
Jul 6 23:35:40.947598 ntpd[1886]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 6 23:35:40.947606 ntpd[1886]: ---------------------------------------------------- Jul 6 23:35:40.947613 ntpd[1886]: ntp-4 is maintained by Network Time Foundation, Jul 6 23:35:40.947620 ntpd[1886]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 6 23:35:40.947627 ntpd[1886]: corporation. Support and training for ntp-4 are Jul 6 23:35:40.947635 ntpd[1886]: available at https://www.nwtime.org/support Jul 6 23:35:40.947642 ntpd[1886]: ----------------------------------------------------
Jul 6 23:35:40.951205 dbus-daemon[1882]: [system] SELinux support is enabled Jul 6 23:35:40.957674 extend-filesystems[1884]: Found loop4 Jul 6 23:35:40.957674 extend-filesystems[1884]: Found loop5 Jul 6 23:35:40.957674 extend-filesystems[1884]: Found loop6 Jul 6 23:35:40.957674 extend-filesystems[1884]: Found loop7 Jul 6 23:35:40.957674 extend-filesystems[1884]: Found nvme0n1 Jul 6 23:35:40.957674 extend-filesystems[1884]: Found nvme0n1p1 Jul 6 23:35:40.957674 extend-filesystems[1884]: Found nvme0n1p2 Jul 6 23:35:40.957674 extend-filesystems[1884]: Found nvme0n1p3 Jul 6 23:35:40.957674 extend-filesystems[1884]: Found usr Jul 6 23:35:40.957674 extend-filesystems[1884]: Found nvme0n1p4 Jul 6 23:35:40.957674 extend-filesystems[1884]: Found nvme0n1p6 Jul 6 23:35:40.957674 extend-filesystems[1884]: Found nvme0n1p7 Jul 6 23:35:40.957674 extend-filesystems[1884]: Found nvme0n1p9 Jul 6 23:35:40.957674 extend-filesystems[1884]: Checking size of /dev/nvme0n1p9
Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: proto: precision = 0.057 usec (-24) Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: basedate set to 2025-06-24 Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: gps base set to 2025-06-29 (week 2373) Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: Listen and drop on 0 v6wildcard [::]:123 Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: Listen normally on 2 lo 127.0.0.1:123 Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: Listen normally on 3 eth0 172.31.20.250:123 Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: Listen normally on 4 lo [::1]:123 Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: bind(21) AF_INET6 fe80::445:1fff:fe04:d345%2#123 flags 0x11 failed: Cannot assign requested address Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: unable to create socket on eth0 (5) for fe80::445:1fff:fe04:d345%2#123 Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: failed to init interface for address fe80::445:1fff:fe04:d345%2 Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: Listening on routing socket on fd #21 for interface updates Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:35:40.988246 ntpd[1886]: 6 Jul 23:35:40 ntpd[1886]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 6 23:35:40.954653 (ntainerd)[1917]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:35:40.953637 ntpd[1886]: proto: precision = 0.057 usec (-24) Jul 6 23:35:41.000134 tar[1897]: linux-amd64/helm Jul 6 23:35:41.000875 update_engine[1894]: I20250706 23:35:40.978192 1894 main.cc:92] Flatcar Update Engine starting Jul 6 23:35:41.000875 update_engine[1894]: I20250706 23:35:40.997246 1894 update_check_scheduler.cc:74] Next update check in 10m17s Jul 6 23:35:40.955443 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:35:40.954987 ntpd[1886]: basedate set to 2025-06-24 Jul 6 23:35:40.959925 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:35:40.955002 ntpd[1886]: gps base set to 2025-06-29 (week 2373) Jul 6 23:35:40.959957 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:35:40.963228 ntpd[1886]: Listen and drop on 0 v6wildcard [::]:123 Jul 6 23:35:40.965217 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:35:40.967336 ntpd[1886]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 6 23:35:40.965241 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:35:40.967519 ntpd[1886]: Listen normally on 2 lo 127.0.0.1:123 Jul 6 23:35:40.989454 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 6 23:35:40.967548 ntpd[1886]: Listen normally on 3 eth0 172.31.20.250:123 Jul 6 23:35:40.999599 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:35:40.967583 ntpd[1886]: Listen normally on 4 lo [::1]:123 Jul 6 23:35:41.003116 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:35:40.967624 ntpd[1886]: bind(21) AF_INET6 fe80::445:1fff:fe04:d345%2#123 flags 0x11 failed: Cannot assign requested address Jul 6 23:35:40.967639 ntpd[1886]: unable to create socket on eth0 (5) for fe80::445:1fff:fe04:d345%2#123 Jul 6 23:35:40.967650 ntpd[1886]: failed to init interface for address fe80::445:1fff:fe04:d345%2 Jul 6 23:35:40.967674 ntpd[1886]: Listening on routing socket on fd #21 for interface updates Jul 6 23:35:40.974628 dbus-daemon[1882]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1827 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 6 23:35:40.975479 ntpd[1886]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:35:40.975513 ntpd[1886]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:35:41.016081 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jul 6 23:35:41.037696 systemd-logind[1891]: Watching system buttons on /dev/input/event1 (Power Button) Jul 6 23:35:41.037719 systemd-logind[1891]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 6 23:35:41.037738 systemd-logind[1891]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:35:41.037951 systemd-logind[1891]: New seat seat0. Jul 6 23:35:41.038768 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:35:41.044066 extend-filesystems[1884]: Resized partition /dev/nvme0n1p9 Jul 6 23:35:41.057561 extend-filesystems[1953]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:35:41.069442 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 6 23:35:41.068902 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:35:41.069592 bash[1952]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:35:41.085584 systemd[1]: Starting sshkeys.service... Jul 6 23:35:41.117190 coreos-metadata[1881]: Jul 06 23:35:41.117 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 6 23:35:41.128756 coreos-metadata[1881]: Jul 06 23:35:41.127 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 6 23:35:41.131708 coreos-metadata[1881]: Jul 06 23:35:41.131 INFO Fetch successful Jul 6 23:35:41.133298 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Jul 6 23:35:41.134547 coreos-metadata[1881]: Jul 06 23:35:41.134 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 6 23:35:41.139492 coreos-metadata[1881]: Jul 06 23:35:41.138 INFO Fetch successful Jul 6 23:35:41.139492 coreos-metadata[1881]: Jul 06 23:35:41.138 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 6 23:35:41.140913 coreos-metadata[1881]: Jul 06 23:35:41.140 INFO Fetch successful Jul 6 23:35:41.142788 coreos-metadata[1881]: Jul 06 23:35:41.141 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 6 23:35:41.144528 coreos-metadata[1881]: Jul 06 23:35:41.144 INFO Fetch successful Jul 6 23:35:41.144528 coreos-metadata[1881]: Jul 06 23:35:41.144 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 6 23:35:41.145103 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 6 23:35:41.152680 coreos-metadata[1881]: Jul 06 23:35:41.152 INFO Fetch failed with 404: resource not found Jul 6 23:35:41.152680 coreos-metadata[1881]: Jul 06 23:35:41.152 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 6 23:35:41.153852 coreos-metadata[1881]: Jul 06 23:35:41.153 INFO Fetch successful Jul 6 23:35:41.153852 coreos-metadata[1881]: Jul 06 23:35:41.153 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 6 23:35:41.156067 coreos-metadata[1881]: Jul 06 23:35:41.155 INFO Fetch successful Jul 6 23:35:41.156067 coreos-metadata[1881]: Jul 06 23:35:41.155 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 6 23:35:41.156867 coreos-metadata[1881]: Jul 06 23:35:41.156 INFO Fetch successful Jul 6 23:35:41.158795 coreos-metadata[1881]: Jul 06 23:35:41.158 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 6 23:35:41.160458 coreos-metadata[1881]: Jul 06 23:35:41.159 INFO Fetch successful Jul 6 23:35:41.160458 coreos-metadata[1881]: Jul 06 23:35:41.159 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 6 23:35:41.165681 coreos-metadata[1881]: Jul 06 23:35:41.164 INFO Fetch successful
Jul 6 23:35:41.219293 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 6 23:35:41.228302 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1680) Jul 6 23:35:41.241175 extend-filesystems[1953]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 6 23:35:41.241175 extend-filesystems[1953]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:35:41.241175 extend-filesystems[1953]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 6 23:35:41.250572 extend-filesystems[1884]: Resized filesystem in /dev/nvme0n1p9 Jul 6 23:35:41.246253 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:35:41.246478 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:35:41.252524 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:35:41.253294 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:35:41.305799 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 6 23:35:41.310741 dbus-daemon[1882]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 6 23:35:41.311637 dbus-daemon[1882]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1933 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 6 23:35:41.321604 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 6 23:35:41.343737 polkitd[2013]: Started polkitd version 121 Jul 6 23:35:41.350682 coreos-metadata[1963]: Jul 06 23:35:41.350 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 6 23:35:41.353061 coreos-metadata[1963]: Jul 06 23:35:41.353 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 6 23:35:41.356386 coreos-metadata[1963]: Jul 06 23:35:41.356 INFO Fetch successful Jul 6 23:35:41.356478 coreos-metadata[1963]: Jul 06 23:35:41.356 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 6 23:35:41.357710 coreos-metadata[1963]: Jul 06 23:35:41.357 INFO Fetch successful Jul 6 23:35:41.362393 unknown[1963]: wrote ssh authorized keys file for user: core Jul 6 23:35:41.387127 polkitd[2013]: Loading rules from directory /etc/polkit-1/rules.d Jul 6 23:35:41.387206 polkitd[2013]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 6 23:35:41.390229 polkitd[2013]: Finished loading, compiling and executing 2 rules Jul 6 23:35:41.396989 dbus-daemon[1882]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 6 23:35:41.399159 systemd[1]: Started polkit.service - Authorization Manager. Jul 6 23:35:41.410992 polkitd[2013]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 6 23:35:41.433225 update-ssh-keys[2049]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:35:41.440098 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:35:41.451236 systemd[1]: Finished sshkeys.service. Jul 6 23:35:41.476097 locksmithd[1942]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:35:41.505556 systemd-hostnamed[1933]: Hostname set to (transient) Jul 6 23:35:41.505966 systemd-resolved[1828]: System hostname changed to 'ip-172-31-20-250'. 
Jul 6 23:35:41.545074 sshd_keygen[1927]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:35:41.589193 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:35:41.596560 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:35:41.602885 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:35:41.603228 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:35:41.612844 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:35:41.623505 containerd[1917]: time="2025-07-06T23:35:41.622546186Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 6 23:35:41.633847 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:35:41.645651 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:35:41.655451 containerd[1917]: time="2025-07-06T23:35:41.653803483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:35:41.654594 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 6 23:35:41.655178 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:35:41.655624 containerd[1917]: time="2025-07-06T23:35:41.655517145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:35:41.655624 containerd[1917]: time="2025-07-06T23:35:41.655544595Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 6 23:35:41.655624 containerd[1917]: time="2025-07-06T23:35:41.655560575Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 6 23:35:41.657378 containerd[1917]: time="2025-07-06T23:35:41.655697762Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 6 23:35:41.657378 containerd[1917]: time="2025-07-06T23:35:41.655720931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 6 23:35:41.657378 containerd[1917]: time="2025-07-06T23:35:41.655771562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:35:41.657378 containerd[1917]: time="2025-07-06T23:35:41.655782078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:35:41.657378 containerd[1917]: time="2025-07-06T23:35:41.655975310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:35:41.657378 containerd[1917]: time="2025-07-06T23:35:41.655988344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 6 23:35:41.657378 containerd[1917]: time="2025-07-06T23:35:41.656004348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:35:41.657378 containerd[1917]: time="2025-07-06T23:35:41.656012819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 6 23:35:41.657378 containerd[1917]: time="2025-07-06T23:35:41.656076004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:35:41.657378 containerd[1917]: time="2025-07-06T23:35:41.656248574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:35:41.657378 containerd[1917]: time="2025-07-06T23:35:41.656436507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:35:41.658069 containerd[1917]: time="2025-07-06T23:35:41.656451044Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 6 23:35:41.658069 containerd[1917]: time="2025-07-06T23:35:41.656524812Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 6 23:35:41.658069 containerd[1917]: time="2025-07-06T23:35:41.656564789Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:35:41.660875 containerd[1917]: time="2025-07-06T23:35:41.660505867Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 6 23:35:41.660875 containerd[1917]: time="2025-07-06T23:35:41.660565841Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 6 23:35:41.660875 containerd[1917]: time="2025-07-06T23:35:41.660584772Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 6 23:35:41.660875 containerd[1917]: time="2025-07-06T23:35:41.660600378Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 6 23:35:41.660875 containerd[1917]: time="2025-07-06T23:35:41.660646807Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 6 23:35:41.660875 containerd[1917]: time="2025-07-06T23:35:41.660794222Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 6 23:35:41.661098 containerd[1917]: time="2025-07-06T23:35:41.661068044Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 6 23:35:41.661200 containerd[1917]: time="2025-07-06T23:35:41.661184578Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 6 23:35:41.661235 containerd[1917]: time="2025-07-06T23:35:41.661203887Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 6 23:35:41.661235 containerd[1917]: time="2025-07-06T23:35:41.661219118Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 6 23:35:41.661235 containerd[1917]: time="2025-07-06T23:35:41.661232189Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 6 23:35:41.661310 containerd[1917]: time="2025-07-06T23:35:41.661244725Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 6 23:35:41.661310 containerd[1917]: time="2025-07-06T23:35:41.661256844Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 6 23:35:41.661310 containerd[1917]: time="2025-07-06T23:35:41.661286557Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 6 23:35:41.661310 containerd[1917]: time="2025-07-06T23:35:41.661300805Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 6 23:35:41.661392 containerd[1917]: time="2025-07-06T23:35:41.661317978Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 6 23:35:41.661392 containerd[1917]: time="2025-07-06T23:35:41.661330851Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 6 23:35:41.661392 containerd[1917]: time="2025-07-06T23:35:41.661341500Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 6 23:35:41.661392 containerd[1917]: time="2025-07-06T23:35:41.661360104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661392 containerd[1917]: time="2025-07-06T23:35:41.661372751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661392 containerd[1917]: time="2025-07-06T23:35:41.661387846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661519 containerd[1917]: time="2025-07-06T23:35:41.661401431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661519 containerd[1917]: time="2025-07-06T23:35:41.661413148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661519 containerd[1917]: time="2025-07-06T23:35:41.661424894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661519 containerd[1917]: time="2025-07-06T23:35:41.661436402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661519 containerd[1917]: time="2025-07-06T23:35:41.661447933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661519 containerd[1917]: time="2025-07-06T23:35:41.661462175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661519 containerd[1917]: time="2025-07-06T23:35:41.661475826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661519 containerd[1917]: time="2025-07-06T23:35:41.661486219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661519 containerd[1917]: time="2025-07-06T23:35:41.661496990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661519 containerd[1917]: time="2025-07-06T23:35:41.661508295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661716 containerd[1917]: time="2025-07-06T23:35:41.661523504Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 6 23:35:41.661716 containerd[1917]: time="2025-07-06T23:35:41.661543029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661716 containerd[1917]: time="2025-07-06T23:35:41.661555780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661716 containerd[1917]: time="2025-07-06T23:35:41.661566030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 6 23:35:41.661716 containerd[1917]: time="2025-07-06T23:35:41.661620982Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 6 23:35:41.661716 containerd[1917]: time="2025-07-06T23:35:41.661638889Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 6 23:35:41.661716 containerd[1917]: time="2025-07-06T23:35:41.661650217Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 6 23:35:41.661716 containerd[1917]: time="2025-07-06T23:35:41.661660969Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 6 23:35:41.661906 containerd[1917]: time="2025-07-06T23:35:41.661720611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.661906 containerd[1917]: time="2025-07-06T23:35:41.661732550Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 6 23:35:41.661906 containerd[1917]: time="2025-07-06T23:35:41.661741556Z" level=info msg="NRI interface is disabled by configuration."
Jul 6 23:35:41.661906 containerd[1917]: time="2025-07-06T23:35:41.661754024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 6 23:35:41.662593 containerd[1917]: time="2025-07-06T23:35:41.662028199Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 6 23:35:41.662593 containerd[1917]: time="2025-07-06T23:35:41.662072156Z" level=info msg="Connect containerd service"
Jul 6 23:35:41.662593 containerd[1917]: time="2025-07-06T23:35:41.662107875Z" level=info msg="using legacy CRI server"
Jul 6 23:35:41.662593 containerd[1917]: time="2025-07-06T23:35:41.662114698Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:35:41.662593 containerd[1917]: time="2025-07-06T23:35:41.662230018Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 6 23:35:41.663104 containerd[1917]: time="2025-07-06T23:35:41.662859440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:35:41.663140 containerd[1917]: time="2025-07-06T23:35:41.663096848Z" level=info msg="Start subscribing containerd event"
Jul 6 23:35:41.663162 containerd[1917]: time="2025-07-06T23:35:41.663141496Z" level=info msg="Start recovering state"
Jul 6 23:35:41.663401 containerd[1917]: time="2025-07-06T23:35:41.663199073Z" level=info msg="Start event monitor"
Jul 6 23:35:41.663401 containerd[1917]: time="2025-07-06T23:35:41.663213199Z" level=info msg="Start snapshots syncer"
Jul 6 23:35:41.663401 containerd[1917]: time="2025-07-06T23:35:41.663221483Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:35:41.663401 containerd[1917]: time="2025-07-06T23:35:41.663228816Z" level=info msg="Start streaming server"
Jul 6 23:35:41.663645 containerd[1917]: time="2025-07-06T23:35:41.663623655Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:35:41.663768 containerd[1917]: time="2025-07-06T23:35:41.663737351Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 6 23:35:41.663929 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:35:41.668022 containerd[1917]: time="2025-07-06T23:35:41.667856278Z" level=info msg="containerd successfully booted in 0.046132s"
Jul 6 23:35:41.906264 tar[1897]: linux-amd64/LICENSE
Jul 6 23:35:41.906481 tar[1897]: linux-amd64/README.md
Jul 6 23:35:41.917430 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:35:41.948031 ntpd[1886]: bind(24) AF_INET6 fe80::445:1fff:fe04:d345%2#123 flags 0x11 failed: Cannot assign requested address
Jul 6 23:35:41.948412 ntpd[1886]: 6 Jul 23:35:41 ntpd[1886]: bind(24) AF_INET6 fe80::445:1fff:fe04:d345%2#123 flags 0x11 failed: Cannot assign requested address
Jul 6 23:35:41.948412 ntpd[1886]: 6 Jul 23:35:41 ntpd[1886]: unable to create socket on eth0 (6) for fe80::445:1fff:fe04:d345%2#123
Jul 6 23:35:41.948412 ntpd[1886]: 6 Jul 23:35:41 ntpd[1886]: failed to init interface for address fe80::445:1fff:fe04:d345%2
Jul 6 23:35:41.948076 ntpd[1886]: unable to create socket on eth0 (6) for fe80::445:1fff:fe04:d345%2#123
Jul 6 23:35:41.948089 ntpd[1886]: failed to init interface for address fe80::445:1fff:fe04:d345%2
Jul 6 23:35:42.064503 systemd-networkd[1827]: eth0: Gained IPv6LL
Jul 6 23:35:42.067747 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:35:42.068938 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:35:42.074653 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jul 6 23:35:42.078676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:35:42.083388 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:35:42.125180 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:35:42.165669 amazon-ssm-agent[2108]: Initializing new seelog logger
Jul 6 23:35:42.165669 amazon-ssm-agent[2108]: New Seelog Logger Creation Complete
Jul 6 23:35:42.165669 amazon-ssm-agent[2108]: 2025/07/06 23:35:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:35:42.165669 amazon-ssm-agent[2108]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:35:42.166119 amazon-ssm-agent[2108]: 2025/07/06 23:35:42 processing appconfig overrides
Jul 6 23:35:42.166432 amazon-ssm-agent[2108]: 2025/07/06 23:35:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:35:42.166432 amazon-ssm-agent[2108]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:35:42.166553 amazon-ssm-agent[2108]: 2025/07/06 23:35:42 processing appconfig overrides
Jul 6 23:35:42.166818 amazon-ssm-agent[2108]: 2025/07/06 23:35:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:35:42.166818 amazon-ssm-agent[2108]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:35:42.166897 amazon-ssm-agent[2108]: 2025/07/06 23:35:42 processing appconfig overrides
Jul 6 23:35:42.167337 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO Proxy environment variables:
Jul 6 23:35:42.169301 amazon-ssm-agent[2108]: 2025/07/06 23:35:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:35:42.169301 amazon-ssm-agent[2108]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:35:42.169425 amazon-ssm-agent[2108]: 2025/07/06 23:35:42 processing appconfig overrides
Jul 6 23:35:42.268382 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO https_proxy:
Jul 6 23:35:42.366231 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO http_proxy:
Jul 6 23:35:42.399960 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO no_proxy:
Jul 6 23:35:42.400057 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO Checking if agent identity type OnPrem can be assumed
Jul 6 23:35:42.400057 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO Checking if agent identity type EC2 can be assumed
Jul 6 23:35:42.400057 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO Agent will take identity from EC2
Jul 6 23:35:42.400057 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 6 23:35:42.400057 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 6 23:35:42.400057 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 6 23:35:42.400057 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jul 6 23:35:42.400057 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jul 6 23:35:42.400232 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [amazon-ssm-agent] Starting Core Agent
Jul 6 23:35:42.400232 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jul 6 23:35:42.400232 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [Registrar] Starting registrar module
Jul 6 23:35:42.400232 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jul 6 23:35:42.400232 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [EC2Identity] EC2 registration was successful.
Jul 6 23:35:42.400232 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [CredentialRefresher] credentialRefresher has started
Jul 6 23:35:42.400232 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [CredentialRefresher] Starting credentials refresher loop
Jul 6 23:35:42.400232 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jul 6 23:35:42.465040 amazon-ssm-agent[2108]: 2025-07-06 23:35:42 INFO [CredentialRefresher] Next credential rotation will be in 30.091659258683332 minutes
Jul 6 23:35:43.073947 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 6 23:35:43.080552 systemd[1]: Started sshd@0-172.31.20.250:22-139.178.68.195:44508.service - OpenSSH per-connection server daemon (139.178.68.195:44508).
Jul 6 23:35:43.292649 sshd[2128]: Accepted publickey for core from 139.178.68.195 port 44508 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I
Jul 6 23:35:43.294768 sshd-session[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:35:43.301999 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:35:43.307975 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:35:43.315804 systemd-logind[1891]: New session 1 of user core.
Jul 6 23:35:43.323822 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:35:43.330584 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:35:43.335409 (systemd)[2132]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:35:43.338673 systemd-logind[1891]: New session c1 of user core.
Jul 6 23:35:43.417830 amazon-ssm-agent[2108]: 2025-07-06 23:35:43 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jul 6 23:35:43.519062 amazon-ssm-agent[2108]: 2025-07-06 23:35:43 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2139) started
Jul 6 23:35:43.525860 systemd[2132]: Queued start job for default target default.target.
Jul 6 23:35:43.532619 systemd[2132]: Created slice app.slice - User Application Slice.
Jul 6 23:35:43.532664 systemd[2132]: Reached target paths.target - Paths.
Jul 6 23:35:43.532723 systemd[2132]: Reached target timers.target - Timers.
Jul 6 23:35:43.535310 systemd[2132]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 6 23:35:43.560152 systemd[2132]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 6 23:35:43.562030 systemd[2132]: Reached target sockets.target - Sockets.
Jul 6 23:35:43.562116 systemd[2132]: Reached target basic.target - Basic System.
Jul 6 23:35:43.562178 systemd[2132]: Reached target default.target - Main User Target.
Jul 6 23:35:43.562217 systemd[2132]: Startup finished in 215ms.
Jul 6 23:35:43.563454 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 6 23:35:43.571466 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 6 23:35:43.619610 amazon-ssm-agent[2108]: 2025-07-06 23:35:43 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jul 6 23:35:43.651823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:35:43.653357 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 6 23:35:43.654637 systemd[1]: Startup finished in 646ms (kernel) + 7.270s (initrd) + 6.724s (userspace) = 14.641s.
Jul 6 23:35:43.662446 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:35:43.731148 systemd[1]: Started sshd@1-172.31.20.250:22-139.178.68.195:44510.service - OpenSSH per-connection server daemon (139.178.68.195:44510).
Jul 6 23:35:43.900858 sshd[2165]: Accepted publickey for core from 139.178.68.195 port 44510 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I
Jul 6 23:35:43.902567 sshd-session[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:35:43.909240 systemd-logind[1891]: New session 2 of user core.
Jul 6 23:35:43.921504 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 6 23:35:44.042006 sshd[2171]: Connection closed by 139.178.68.195 port 44510
Jul 6 23:35:44.042559 sshd-session[2165]: pam_unix(sshd:session): session closed for user core
Jul 6 23:35:44.045819 systemd[1]: sshd@1-172.31.20.250:22-139.178.68.195:44510.service: Deactivated successfully.
Jul 6 23:35:44.047649 systemd[1]: session-2.scope: Deactivated successfully.
Jul 6 23:35:44.050139 systemd-logind[1891]: Session 2 logged out. Waiting for processes to exit.
Jul 6 23:35:44.051377 systemd-logind[1891]: Removed session 2.
Jul 6 23:35:44.087861 systemd[1]: Started sshd@2-172.31.20.250:22-139.178.68.195:44520.service - OpenSSH per-connection server daemon (139.178.68.195:44520).
Jul 6 23:35:44.249953 sshd[2177]: Accepted publickey for core from 139.178.68.195 port 44520 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I
Jul 6 23:35:44.251820 sshd-session[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:35:44.259153 systemd-logind[1891]: New session 3 of user core.
Jul 6 23:35:44.264451 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 6 23:35:44.379151 sshd[2179]: Connection closed by 139.178.68.195 port 44520
Jul 6 23:35:44.379665 sshd-session[2177]: pam_unix(sshd:session): session closed for user core
Jul 6 23:35:44.383146 systemd[1]: sshd@2-172.31.20.250:22-139.178.68.195:44520.service: Deactivated successfully.
Jul 6 23:35:44.384909 systemd[1]: session-3.scope: Deactivated successfully.
Jul 6 23:35:44.386831 systemd-logind[1891]: Session 3 logged out. Waiting for processes to exit.
Jul 6 23:35:44.388105 systemd-logind[1891]: Removed session 3.
Jul 6 23:35:44.416602 systemd[1]: Started sshd@3-172.31.20.250:22-139.178.68.195:44536.service - OpenSSH per-connection server daemon (139.178.68.195:44536).
Jul 6 23:35:44.507321 kubelet[2158]: E0706 23:35:44.507148 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:35:44.508832 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:35:44.508979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:35:44.509250 systemd[1]: kubelet.service: Consumed 1.048s CPU time, 268.2M memory peak.
Jul 6 23:35:44.577201 sshd[2186]: Accepted publickey for core from 139.178.68.195 port 44536 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I
Jul 6 23:35:44.578651 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:35:44.584002 systemd-logind[1891]: New session 4 of user core.
Jul 6 23:35:44.592527 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:35:44.710288 sshd[2189]: Connection closed by 139.178.68.195 port 44536
Jul 6 23:35:44.710854 sshd-session[2186]: pam_unix(sshd:session): session closed for user core
Jul 6 23:35:44.714038 systemd[1]: sshd@3-172.31.20.250:22-139.178.68.195:44536.service: Deactivated successfully.
Jul 6 23:35:44.716101 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:35:44.717501 systemd-logind[1891]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:35:44.718498 systemd-logind[1891]: Removed session 4.
Jul 6 23:35:44.754743 systemd[1]: Started sshd@4-172.31.20.250:22-139.178.68.195:44552.service - OpenSSH per-connection server daemon (139.178.68.195:44552).
Jul 6 23:35:44.920523 sshd[2195]: Accepted publickey for core from 139.178.68.195 port 44552 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I
Jul 6 23:35:44.921645 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:35:44.926559 systemd-logind[1891]: New session 5 of user core.
Jul 6 23:35:44.934518 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:35:44.947995 ntpd[1886]: Listen normally on 7 eth0 [fe80::445:1fff:fe04:d345%2]:123
Jul 6 23:35:44.948384 ntpd[1886]: 6 Jul 23:35:44 ntpd[1886]: Listen normally on 7 eth0 [fe80::445:1fff:fe04:d345%2]:123
Jul 6 23:35:45.068110 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:35:45.068435 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:35:45.083050 sudo[2198]: pam_unix(sudo:session): session closed for user root
Jul 6 23:35:45.106453 sshd[2197]: Connection closed by 139.178.68.195 port 44552
Jul 6 23:35:45.107374 sshd-session[2195]: pam_unix(sshd:session): session closed for user core
Jul 6 23:35:45.111449 systemd[1]: sshd@4-172.31.20.250:22-139.178.68.195:44552.service: Deactivated successfully.
Jul 6 23:35:45.113299 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:35:45.114096 systemd-logind[1891]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:35:45.115560 systemd-logind[1891]: Removed session 5.
Jul 6 23:35:45.142657 systemd[1]: Started sshd@5-172.31.20.250:22-139.178.68.195:44560.service - OpenSSH per-connection server daemon (139.178.68.195:44560).
Jul 6 23:35:45.307720 sshd[2204]: Accepted publickey for core from 139.178.68.195 port 44560 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I
Jul 6 23:35:45.308673 sshd-session[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:35:45.313405 systemd-logind[1891]: New session 6 of user core.
Jul 6 23:35:45.320498 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:35:45.420991 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:35:45.421410 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:35:45.425516 sudo[2208]: pam_unix(sudo:session): session closed for user root
Jul 6 23:35:45.431415 sudo[2207]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 6 23:35:45.431826 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:35:45.445711 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:35:45.477488 augenrules[2230]: No rules
Jul 6 23:35:45.479023 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:35:45.479336 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:35:45.480631 sudo[2207]: pam_unix(sudo:session): session closed for user root
Jul 6 23:35:45.503477 sshd[2206]: Connection closed by 139.178.68.195 port 44560
Jul 6 23:35:45.503992 sshd-session[2204]: pam_unix(sshd:session): session closed for user core
Jul 6 23:35:45.507169 systemd[1]: sshd@5-172.31.20.250:22-139.178.68.195:44560.service: Deactivated successfully.
Jul 6 23:35:45.508986 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:35:45.510329 systemd-logind[1891]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:35:45.511300 systemd-logind[1891]: Removed session 6.
Jul 6 23:35:45.542655 systemd[1]: Started sshd@6-172.31.20.250:22-139.178.68.195:44574.service - OpenSSH per-connection server daemon (139.178.68.195:44574).
Jul 6 23:35:45.708670 sshd[2239]: Accepted publickey for core from 139.178.68.195 port 44574 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I
Jul 6 23:35:45.710034 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:35:45.714739 systemd-logind[1891]: New session 7 of user core.
Jul 6 23:35:45.722500 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:35:45.822735 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:35:45.823143 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:35:46.487707 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 6 23:35:46.489060 (dockerd)[2259]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:35:47.172091 dockerd[2259]: time="2025-07-06T23:35:47.171999577Z" level=info msg="Starting up"
Jul 6 23:35:47.546383 dockerd[2259]: time="2025-07-06T23:35:47.546231965Z" level=info msg="Loading containers: start."
Jul 6 23:35:47.763292 kernel: Initializing XFRM netlink socket Jul 6 23:35:47.804903 (udev-worker)[2283]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:35:47.865472 systemd-networkd[1827]: docker0: Link UP Jul 6 23:35:47.895148 dockerd[2259]: time="2025-07-06T23:35:47.895098754Z" level=info msg="Loading containers: done." Jul 6 23:35:47.910374 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1004371437-merged.mount: Deactivated successfully. Jul 6 23:35:47.915820 dockerd[2259]: time="2025-07-06T23:35:47.915717905Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:35:47.915999 dockerd[2259]: time="2025-07-06T23:35:47.915827862Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 6 23:35:47.915999 dockerd[2259]: time="2025-07-06T23:35:47.915937031Z" level=info msg="Daemon has completed initialization" Jul 6 23:35:48.576480 systemd-resolved[1828]: Clock change detected. Flushing caches. Jul 6 23:35:48.580135 dockerd[2259]: time="2025-07-06T23:35:48.579872768Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:35:48.580023 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:35:49.654123 containerd[1917]: time="2025-07-06T23:35:49.654084125Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 6 23:35:50.202015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3690210552.mount: Deactivated successfully. 
Jul 6 23:35:51.315004 containerd[1917]: time="2025-07-06T23:35:51.314794214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:51.316265 containerd[1917]: time="2025-07-06T23:35:51.316202648Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 6 23:35:51.317270 containerd[1917]: time="2025-07-06T23:35:51.317201611Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:51.320111 containerd[1917]: time="2025-07-06T23:35:51.320053047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:51.321729 containerd[1917]: time="2025-07-06T23:35:51.321272200Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.667146095s" Jul 6 23:35:51.321729 containerd[1917]: time="2025-07-06T23:35:51.321334259Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 6 23:35:51.326775 containerd[1917]: time="2025-07-06T23:35:51.326738173Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 6 23:35:52.684834 containerd[1917]: time="2025-07-06T23:35:52.684767143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:52.685860 containerd[1917]: time="2025-07-06T23:35:52.685812155Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 6 23:35:52.686956 containerd[1917]: time="2025-07-06T23:35:52.686800641Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:52.689986 containerd[1917]: time="2025-07-06T23:35:52.689912731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:52.691312 containerd[1917]: time="2025-07-06T23:35:52.690902572Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.364044011s" Jul 6 23:35:52.691312 containerd[1917]: time="2025-07-06T23:35:52.690947999Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 6 23:35:52.691583 containerd[1917]: time="2025-07-06T23:35:52.691561684Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 6 23:35:53.867179 containerd[1917]: time="2025-07-06T23:35:53.867113915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:53.868146 containerd[1917]: time="2025-07-06T23:35:53.868099328Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 6 23:35:53.869239 containerd[1917]: time="2025-07-06T23:35:53.869183964Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:53.871781 containerd[1917]: time="2025-07-06T23:35:53.871735449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:53.876503 containerd[1917]: time="2025-07-06T23:35:53.874590775Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.182943218s" Jul 6 23:35:53.876503 containerd[1917]: time="2025-07-06T23:35:53.874626776Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 6 23:35:53.877033 containerd[1917]: time="2025-07-06T23:35:53.877003863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 6 23:35:54.870149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362968385.mount: Deactivated successfully. Jul 6 23:35:55.388065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:35:55.395215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 6 23:35:55.470050 containerd[1917]: time="2025-07-06T23:35:55.469383461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:55.473983 containerd[1917]: time="2025-07-06T23:35:55.473908198Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 6 23:35:55.479673 containerd[1917]: time="2025-07-06T23:35:55.479594816Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:55.488575 containerd[1917]: time="2025-07-06T23:35:55.487331656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:55.488575 containerd[1917]: time="2025-07-06T23:35:55.487963965Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.610927542s" Jul 6 23:35:55.488575 containerd[1917]: time="2025-07-06T23:35:55.487987211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 6 23:35:55.489389 containerd[1917]: time="2025-07-06T23:35:55.489360599Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:35:55.627221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:35:55.632019 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:35:55.679881 kubelet[2526]: E0706 23:35:55.679280 2526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:35:55.683797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:35:55.683997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:35:55.684665 systemd[1]: kubelet.service: Consumed 170ms CPU time, 108.7M memory peak. Jul 6 23:35:55.973079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount677097192.mount: Deactivated successfully. Jul 6 23:35:56.968709 containerd[1917]: time="2025-07-06T23:35:56.968651784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:56.970085 containerd[1917]: time="2025-07-06T23:35:56.970016246Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 6 23:35:56.971261 containerd[1917]: time="2025-07-06T23:35:56.971200468Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:56.974322 containerd[1917]: time="2025-07-06T23:35:56.974093936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:56.978711 containerd[1917]: time="2025-07-06T23:35:56.976436468Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.487031615s" Jul 6 23:35:56.978711 containerd[1917]: time="2025-07-06T23:35:56.976484251Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:35:56.979827 containerd[1917]: time="2025-07-06T23:35:56.979789754Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:35:57.423893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4089664785.mount: Deactivated successfully. Jul 6 23:35:57.438616 containerd[1917]: time="2025-07-06T23:35:57.438549975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:57.440446 containerd[1917]: time="2025-07-06T23:35:57.440393392Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:35:57.442645 containerd[1917]: time="2025-07-06T23:35:57.442588263Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:57.446254 containerd[1917]: time="2025-07-06T23:35:57.446194636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:35:57.447320 containerd[1917]: time="2025-07-06T23:35:57.446857277Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 466.940594ms" Jul 6 23:35:57.447320 containerd[1917]: time="2025-07-06T23:35:57.446902201Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:35:57.447890 containerd[1917]: time="2025-07-06T23:35:57.447783606Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 6 23:35:58.015019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount799012298.mount: Deactivated successfully. Jul 6 23:36:00.044695 containerd[1917]: time="2025-07-06T23:36:00.044136307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:36:00.047742 containerd[1917]: time="2025-07-06T23:36:00.047454943Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:36:00.047742 containerd[1917]: time="2025-07-06T23:36:00.047687088Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 6 23:36:00.060371 containerd[1917]: time="2025-07-06T23:36:00.060321528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:36:00.062299 containerd[1917]: time="2025-07-06T23:36:00.062060535Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest 
\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.614246273s" Jul 6 23:36:00.062299 containerd[1917]: time="2025-07-06T23:36:00.062111098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 6 23:36:03.714692 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:36:03.714957 systemd[1]: kubelet.service: Consumed 170ms CPU time, 108.7M memory peak. Jul 6 23:36:03.721655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:36:03.756644 systemd[1]: Reload requested from client PID 2671 ('systemctl') (unit session-7.scope)... Jul 6 23:36:03.756662 systemd[1]: Reloading... Jul 6 23:36:03.913341 zram_generator::config[2717]: No configuration found. Jul 6 23:36:04.080275 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:36:04.199159 systemd[1]: Reloading finished in 441 ms. Jul 6 23:36:04.247078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:36:04.255406 (kubelet)[2771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:36:04.259498 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:36:04.260880 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:36:04.261203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:36:04.261281 systemd[1]: kubelet.service: Consumed 138ms CPU time, 98.7M memory peak. Jul 6 23:36:04.267749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:36:04.469758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:36:04.476412 (kubelet)[2783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:36:04.535840 kubelet[2783]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:36:04.536483 kubelet[2783]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:36:04.536483 kubelet[2783]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:36:04.539678 kubelet[2783]: I0706 23:36:04.539187 2783 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:36:04.836460 kubelet[2783]: I0706 23:36:04.836316 2783 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:36:04.836460 kubelet[2783]: I0706 23:36:04.836368 2783 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:36:04.837320 kubelet[2783]: I0706 23:36:04.836940 2783 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:36:04.874053 kubelet[2783]: E0706 23:36:04.874009 2783 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.250:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.250:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:36:04.874879 kubelet[2783]: I0706 
23:36:04.874688 2783 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:36:04.884875 kubelet[2783]: E0706 23:36:04.884829 2783 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:36:04.884875 kubelet[2783]: I0706 23:36:04.884860 2783 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:36:04.890651 kubelet[2783]: I0706 23:36:04.890484 2783 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:36:04.894620 kubelet[2783]: I0706 23:36:04.894582 2783 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:36:04.895190 kubelet[2783]: I0706 23:36:04.895107 2783 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:36:04.895339 kubelet[2783]: I0706 23:36:04.895149 2783 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-20-250","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:36:04.895450 kubelet[2783]: I0706 23:36:04.895353 2783 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:36:04.895450 kubelet[2783]: I0706 23:36:04.895363 2783 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:36:04.895510 kubelet[2783]: I0706 23:36:04.895461 2783 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:36:04.901764 kubelet[2783]: I0706 23:36:04.901712 2783 kubelet.go:408] 
"Attempting to sync node with API server" Jul 6 23:36:04.901764 kubelet[2783]: I0706 23:36:04.901757 2783 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:36:04.901900 kubelet[2783]: I0706 23:36:04.901794 2783 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:36:04.901900 kubelet[2783]: I0706 23:36:04.901812 2783 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:36:04.905189 kubelet[2783]: W0706 23:36:04.904329 2783 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-250&limit=500&resourceVersion=0": dial tcp 172.31.20.250:6443: connect: connection refused Jul 6 23:36:04.905189 kubelet[2783]: E0706 23:36:04.904403 2783 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-250&limit=500&resourceVersion=0\": dial tcp 172.31.20.250:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:36:04.905351 kubelet[2783]: W0706 23:36:04.905260 2783 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.250:6443: connect: connection refused Jul 6 23:36:04.905351 kubelet[2783]: E0706 23:36:04.905315 2783 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.250:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:36:04.905732 kubelet[2783]: I0706 23:36:04.905711 2783 kuberuntime_manager.go:262] "Container 
runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:36:04.909714 kubelet[2783]: I0706 23:36:04.909682 2783 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:36:04.909820 kubelet[2783]: W0706 23:36:04.909751 2783 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:36:04.911569 kubelet[2783]: I0706 23:36:04.911551 2783 server.go:1274] "Started kubelet" Jul 6 23:36:04.912154 kubelet[2783]: I0706 23:36:04.912121 2783 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:36:04.913046 kubelet[2783]: I0706 23:36:04.912991 2783 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:36:04.916518 kubelet[2783]: I0706 23:36:04.916481 2783 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:36:04.917081 kubelet[2783]: I0706 23:36:04.916833 2783 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:36:04.920410 kubelet[2783]: I0706 23:36:04.919680 2783 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:36:04.921467 kubelet[2783]: E0706 23:36:04.917005 2783 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.250:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.250:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-250.184fcdb6c31bbe3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-250,UID:ip-172-31-20-250,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-250,},FirstTimestamp:2025-07-06 23:36:04.91152953 +0000 UTC m=+0.430665665,LastTimestamp:2025-07-06 23:36:04.91152953 +0000 UTC 
m=+0.430665665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-250,}" Jul 6 23:36:04.921798 kubelet[2783]: I0706 23:36:04.921784 2783 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:36:04.929401 kubelet[2783]: I0706 23:36:04.929375 2783 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:36:04.929626 kubelet[2783]: E0706 23:36:04.929609 2783 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-20-250\" not found" Jul 6 23:36:04.932057 kubelet[2783]: E0706 23:36:04.932025 2783 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-250?timeout=10s\": dial tcp 172.31.20.250:6443: connect: connection refused" interval="200ms" Jul 6 23:36:04.932208 kubelet[2783]: I0706 23:36:04.932084 2783 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:36:04.932480 kubelet[2783]: I0706 23:36:04.932124 2783 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:36:04.932699 kubelet[2783]: I0706 23:36:04.932688 2783 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:36:04.932835 kubelet[2783]: I0706 23:36:04.932823 2783 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:36:04.941112 kubelet[2783]: I0706 23:36:04.940800 2783 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:36:04.945788 kubelet[2783]: I0706 23:36:04.945750 2783 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 6 23:36:04.947996 kubelet[2783]: I0706 23:36:04.947967 2783 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:36:04.947996 kubelet[2783]: I0706 23:36:04.947996 2783 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:36:04.951894 kubelet[2783]: I0706 23:36:04.948012 2783 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:36:04.951894 kubelet[2783]: E0706 23:36:04.948051 2783 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:36:04.957600 kubelet[2783]: W0706 23:36:04.957440 2783 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.250:6443: connect: connection refused Jul 6 23:36:04.957600 kubelet[2783]: E0706 23:36:04.957499 2783 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.250:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:36:04.958039 kubelet[2783]: W0706 23:36:04.957958 2783 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.250:6443: connect: connection refused Jul 6 23:36:04.958039 kubelet[2783]: E0706 23:36:04.958002 2783 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.250:6443: connect: 
connection refused" logger="UnhandledError" Jul 6 23:36:04.971578 kubelet[2783]: E0706 23:36:04.971534 2783 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:36:04.981340 kubelet[2783]: I0706 23:36:04.981086 2783 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:36:04.981340 kubelet[2783]: I0706 23:36:04.981103 2783 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:36:04.981340 kubelet[2783]: I0706 23:36:04.981124 2783 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:36:04.986065 kubelet[2783]: I0706 23:36:04.986039 2783 policy_none.go:49] "None policy: Start" Jul 6 23:36:04.987358 kubelet[2783]: I0706 23:36:04.987331 2783 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:36:04.987358 kubelet[2783]: I0706 23:36:04.987357 2783 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:36:04.995090 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:36:05.005925 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:36:05.010061 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 6 23:36:05.021795 kubelet[2783]: I0706 23:36:05.021625 2783 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:36:05.022034 kubelet[2783]: I0706 23:36:05.022014 2783 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:36:05.022252 kubelet[2783]: I0706 23:36:05.022033 2783 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:36:05.022315 kubelet[2783]: I0706 23:36:05.022260 2783 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:36:05.024167 kubelet[2783]: E0706 23:36:05.024074 2783 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-250\" not found" Jul 6 23:36:05.062122 systemd[1]: Created slice kubepods-burstable-pod0d3da3b0c5f61a46f5087d9db9512006.slice - libcontainer container kubepods-burstable-pod0d3da3b0c5f61a46f5087d9db9512006.slice. Jul 6 23:36:05.078803 systemd[1]: Created slice kubepods-burstable-pod9add1624b70c91141101864b1ebc4459.slice - libcontainer container kubepods-burstable-pod9add1624b70c91141101864b1ebc4459.slice. Jul 6 23:36:05.084437 systemd[1]: Created slice kubepods-burstable-pod9543478b30b937c981f7c607eeaa579e.slice - libcontainer container kubepods-burstable-pod9543478b30b937c981f7c607eeaa579e.slice. 
Jul 6 23:36:05.124327 kubelet[2783]: I0706 23:36:05.124146 2783 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-250"
Jul 6 23:36:05.126184 kubelet[2783]: E0706 23:36:05.124647 2783 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.250:6443/api/v1/nodes\": dial tcp 172.31.20.250:6443: connect: connection refused" node="ip-172-31-20-250"
Jul 6 23:36:05.133318 kubelet[2783]: E0706 23:36:05.133263 2783 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-250?timeout=10s\": dial tcp 172.31.20.250:6443: connect: connection refused" interval="400ms"
Jul 6 23:36:05.134549 kubelet[2783]: I0706 23:36:05.134509 2783 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d3da3b0c5f61a46f5087d9db9512006-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-250\" (UID: \"0d3da3b0c5f61a46f5087d9db9512006\") " pod="kube-system/kube-controller-manager-ip-172-31-20-250"
Jul 6 23:36:05.134549 kubelet[2783]: I0706 23:36:05.134548 2783 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9add1624b70c91141101864b1ebc4459-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-250\" (UID: \"9add1624b70c91141101864b1ebc4459\") " pod="kube-system/kube-scheduler-ip-172-31-20-250"
Jul 6 23:36:05.134844 kubelet[2783]: I0706 23:36:05.134569 2783 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9543478b30b937c981f7c607eeaa579e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-250\" (UID: \"9543478b30b937c981f7c607eeaa579e\") " pod="kube-system/kube-apiserver-ip-172-31-20-250"
Jul 6 23:36:05.134844 kubelet[2783]: I0706 23:36:05.134587 2783 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9543478b30b937c981f7c607eeaa579e-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-250\" (UID: \"9543478b30b937c981f7c607eeaa579e\") " pod="kube-system/kube-apiserver-ip-172-31-20-250"
Jul 6 23:36:05.134844 kubelet[2783]: I0706 23:36:05.134603 2783 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d3da3b0c5f61a46f5087d9db9512006-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-250\" (UID: \"0d3da3b0c5f61a46f5087d9db9512006\") " pod="kube-system/kube-controller-manager-ip-172-31-20-250"
Jul 6 23:36:05.134844 kubelet[2783]: I0706 23:36:05.134618 2783 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d3da3b0c5f61a46f5087d9db9512006-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-250\" (UID: \"0d3da3b0c5f61a46f5087d9db9512006\") " pod="kube-system/kube-controller-manager-ip-172-31-20-250"
Jul 6 23:36:05.134844 kubelet[2783]: I0706 23:36:05.134635 2783 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d3da3b0c5f61a46f5087d9db9512006-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-250\" (UID: \"0d3da3b0c5f61a46f5087d9db9512006\") " pod="kube-system/kube-controller-manager-ip-172-31-20-250"
Jul 6 23:36:05.134996 kubelet[2783]: I0706 23:36:05.134652 2783 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d3da3b0c5f61a46f5087d9db9512006-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-250\" (UID: \"0d3da3b0c5f61a46f5087d9db9512006\") " pod="kube-system/kube-controller-manager-ip-172-31-20-250"
Jul 6 23:36:05.134996 kubelet[2783]: I0706 23:36:05.134679 2783 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9543478b30b937c981f7c607eeaa579e-ca-certs\") pod \"kube-apiserver-ip-172-31-20-250\" (UID: \"9543478b30b937c981f7c607eeaa579e\") " pod="kube-system/kube-apiserver-ip-172-31-20-250"
Jul 6 23:36:05.326874 kubelet[2783]: I0706 23:36:05.326822 2783 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-250"
Jul 6 23:36:05.327143 kubelet[2783]: E0706 23:36:05.327119 2783 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.250:6443/api/v1/nodes\": dial tcp 172.31.20.250:6443: connect: connection refused" node="ip-172-31-20-250"
Jul 6 23:36:05.377931 containerd[1917]: time="2025-07-06T23:36:05.377750631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-250,Uid:0d3da3b0c5f61a46f5087d9db9512006,Namespace:kube-system,Attempt:0,}"
Jul 6 23:36:05.383186 containerd[1917]: time="2025-07-06T23:36:05.383137424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-250,Uid:9add1624b70c91141101864b1ebc4459,Namespace:kube-system,Attempt:0,}"
Jul 6 23:36:05.387919 containerd[1917]: time="2025-07-06T23:36:05.387887160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-250,Uid:9543478b30b937c981f7c607eeaa579e,Namespace:kube-system,Attempt:0,}"
Jul 6 23:36:05.534410 kubelet[2783]: E0706 23:36:05.534344 2783 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-250?timeout=10s\": dial tcp 172.31.20.250:6443: connect: connection refused" interval="800ms"
Jul 6 23:36:05.729174 kubelet[2783]: I0706 23:36:05.729082 2783 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-250"
Jul 6 23:36:05.729640 kubelet[2783]: E0706 23:36:05.729384 2783 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.250:6443/api/v1/nodes\": dial tcp 172.31.20.250:6443: connect: connection refused" node="ip-172-31-20-250"
Jul 6 23:36:05.854092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount325670862.mount: Deactivated successfully.
Jul 6 23:36:05.869689 containerd[1917]: time="2025-07-06T23:36:05.869636570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:36:05.871742 containerd[1917]: time="2025-07-06T23:36:05.871683669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jul 6 23:36:05.878308 containerd[1917]: time="2025-07-06T23:36:05.878227612Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:36:05.883108 containerd[1917]: time="2025-07-06T23:36:05.883055730Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:36:05.885066 containerd[1917]: time="2025-07-06T23:36:05.885025066Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:36:05.887147 containerd[1917]: time="2025-07-06T23:36:05.887097967Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 6 23:36:05.889250 containerd[1917]: time="2025-07-06T23:36:05.889198479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:36:05.889979 containerd[1917]: time="2025-07-06T23:36:05.889768724Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 511.849907ms"
Jul 6 23:36:05.891682 containerd[1917]: time="2025-07-06T23:36:05.891625787Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 6 23:36:05.906344 containerd[1917]: time="2025-07-06T23:36:05.905658710Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 522.411263ms"
Jul 6 23:36:05.906883 containerd[1917]: time="2025-07-06T23:36:05.906853469Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 518.881365ms"
Jul 6 23:36:06.009959 kubelet[2783]: W0706 23:36:06.009791 2783 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.250:6443: connect: connection refused
Jul 6 23:36:06.009959 kubelet[2783]: E0706 23:36:06.009858 2783 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.250:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:36:06.110471 containerd[1917]: time="2025-07-06T23:36:06.110114501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:36:06.110471 containerd[1917]: time="2025-07-06T23:36:06.110198586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:36:06.110471 containerd[1917]: time="2025-07-06T23:36:06.110222974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:36:06.111760 containerd[1917]: time="2025-07-06T23:36:06.111448315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:36:06.111760 containerd[1917]: time="2025-07-06T23:36:06.111524471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:36:06.111760 containerd[1917]: time="2025-07-06T23:36:06.111543964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:36:06.112302 containerd[1917]: time="2025-07-06T23:36:06.111664323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:36:06.112451 containerd[1917]: time="2025-07-06T23:36:06.112405527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:36:06.123034 containerd[1917]: time="2025-07-06T23:36:06.122328197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:36:06.128022 containerd[1917]: time="2025-07-06T23:36:06.124155978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:36:06.128022 containerd[1917]: time="2025-07-06T23:36:06.127657155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:36:06.128022 containerd[1917]: time="2025-07-06T23:36:06.127792258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:36:06.134082 kubelet[2783]: W0706 23:36:06.133948 2783 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.250:6443: connect: connection refused
Jul 6 23:36:06.134082 kubelet[2783]: E0706 23:36:06.134048 2783 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.250:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:36:06.147544 systemd[1]: Started cri-containerd-5f0eba891a7bd22d71a9a70ad81d9bc786eceaef881a35ec51d859a60eee9696.scope - libcontainer container 5f0eba891a7bd22d71a9a70ad81d9bc786eceaef881a35ec51d859a60eee9696.
Jul 6 23:36:06.157163 kubelet[2783]: W0706 23:36:06.155528 2783 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.250:6443: connect: connection refused
Jul 6 23:36:06.157163 kubelet[2783]: E0706 23:36:06.155604 2783 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.250:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:36:06.163778 systemd[1]: Started cri-containerd-9a8f169b140118fe4b196f3b3dbd03e93e2b8f204d25cfa5bd0bb88a2a5746ce.scope - libcontainer container 9a8f169b140118fe4b196f3b3dbd03e93e2b8f204d25cfa5bd0bb88a2a5746ce.
Jul 6 23:36:06.170171 systemd[1]: Started cri-containerd-1f3b48f01ac9fb79733e2ee09d7801d337160fb94633c39ff54d4fef18e3352e.scope - libcontainer container 1f3b48f01ac9fb79733e2ee09d7801d337160fb94633c39ff54d4fef18e3352e.
Jul 6 23:36:06.241073 containerd[1917]: time="2025-07-06T23:36:06.240680152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-250,Uid:0d3da3b0c5f61a46f5087d9db9512006,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f0eba891a7bd22d71a9a70ad81d9bc786eceaef881a35ec51d859a60eee9696\""
Jul 6 23:36:06.253185 containerd[1917]: time="2025-07-06T23:36:06.252937474Z" level=info msg="CreateContainer within sandbox \"5f0eba891a7bd22d71a9a70ad81d9bc786eceaef881a35ec51d859a60eee9696\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 6 23:36:06.266862 containerd[1917]: time="2025-07-06T23:36:06.266736628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-250,Uid:9543478b30b937c981f7c607eeaa579e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f3b48f01ac9fb79733e2ee09d7801d337160fb94633c39ff54d4fef18e3352e\""
Jul 6 23:36:06.279954 containerd[1917]: time="2025-07-06T23:36:06.279921796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-250,Uid:9add1624b70c91141101864b1ebc4459,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a8f169b140118fe4b196f3b3dbd03e93e2b8f204d25cfa5bd0bb88a2a5746ce\""
Jul 6 23:36:06.280917 containerd[1917]: time="2025-07-06T23:36:06.280890674Z" level=info msg="CreateContainer within sandbox \"1f3b48f01ac9fb79733e2ee09d7801d337160fb94633c39ff54d4fef18e3352e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 6 23:36:06.286965 containerd[1917]: time="2025-07-06T23:36:06.286932583Z" level=info msg="CreateContainer within sandbox \"9a8f169b140118fe4b196f3b3dbd03e93e2b8f204d25cfa5bd0bb88a2a5746ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 6 23:36:06.314210 containerd[1917]: time="2025-07-06T23:36:06.314159286Z" level=info msg="CreateContainer within sandbox \"5f0eba891a7bd22d71a9a70ad81d9bc786eceaef881a35ec51d859a60eee9696\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0143c1c42e53d1a8736024e9e2df784d80760236f670e8c35789a3887e521eb2\""
Jul 6 23:36:06.317825 containerd[1917]: time="2025-07-06T23:36:06.316677916Z" level=info msg="StartContainer for \"0143c1c42e53d1a8736024e9e2df784d80760236f670e8c35789a3887e521eb2\""
Jul 6 23:36:06.323102 containerd[1917]: time="2025-07-06T23:36:06.323044358Z" level=info msg="CreateContainer within sandbox \"1f3b48f01ac9fb79733e2ee09d7801d337160fb94633c39ff54d4fef18e3352e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cba8d35e3395d87ae750ff6b2b2ebf713cc52346b95283b1f3e5a9f8f229addd\""
Jul 6 23:36:06.323549 containerd[1917]: time="2025-07-06T23:36:06.323523636Z" level=info msg="StartContainer for \"cba8d35e3395d87ae750ff6b2b2ebf713cc52346b95283b1f3e5a9f8f229addd\""
Jul 6 23:36:06.333717 containerd[1917]: time="2025-07-06T23:36:06.333605149Z" level=info msg="CreateContainer within sandbox \"9a8f169b140118fe4b196f3b3dbd03e93e2b8f204d25cfa5bd0bb88a2a5746ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"65241c3467a7e20f193cfc5510c455a3e14b9639e4ddb8c9b302ad16fc9fddbb\""
Jul 6 23:36:06.334080 containerd[1917]: time="2025-07-06T23:36:06.334062576Z" level=info msg="StartContainer for \"65241c3467a7e20f193cfc5510c455a3e14b9639e4ddb8c9b302ad16fc9fddbb\""
Jul 6 23:36:06.336608 kubelet[2783]: E0706 23:36:06.336551 2783 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-250?timeout=10s\": dial tcp 172.31.20.250:6443: connect: connection refused" interval="1.6s"
Jul 6 23:36:06.356500 systemd[1]: Started cri-containerd-0143c1c42e53d1a8736024e9e2df784d80760236f670e8c35789a3887e521eb2.scope - libcontainer container 0143c1c42e53d1a8736024e9e2df784d80760236f670e8c35789a3887e521eb2.
Jul 6 23:36:06.372922 systemd[1]: Started cri-containerd-cba8d35e3395d87ae750ff6b2b2ebf713cc52346b95283b1f3e5a9f8f229addd.scope - libcontainer container cba8d35e3395d87ae750ff6b2b2ebf713cc52346b95283b1f3e5a9f8f229addd.
Jul 6 23:36:06.386536 systemd[1]: Started cri-containerd-65241c3467a7e20f193cfc5510c455a3e14b9639e4ddb8c9b302ad16fc9fddbb.scope - libcontainer container 65241c3467a7e20f193cfc5510c455a3e14b9639e4ddb8c9b302ad16fc9fddbb.
Jul 6 23:36:06.407853 kubelet[2783]: W0706 23:36:06.407699 2783 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-250&limit=500&resourceVersion=0": dial tcp 172.31.20.250:6443: connect: connection refused
Jul 6 23:36:06.407853 kubelet[2783]: E0706 23:36:06.407864 2783 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-250&limit=500&resourceVersion=0\": dial tcp 172.31.20.250:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:36:06.452198 containerd[1917]: time="2025-07-06T23:36:06.451374415Z" level=info msg="StartContainer for \"0143c1c42e53d1a8736024e9e2df784d80760236f670e8c35789a3887e521eb2\" returns successfully"
Jul 6 23:36:06.458622 containerd[1917]: time="2025-07-06T23:36:06.458584835Z" level=info msg="StartContainer for \"cba8d35e3395d87ae750ff6b2b2ebf713cc52346b95283b1f3e5a9f8f229addd\" returns successfully"
Jul 6 23:36:06.475541 containerd[1917]: time="2025-07-06T23:36:06.475490032Z" level=info msg="StartContainer for \"65241c3467a7e20f193cfc5510c455a3e14b9639e4ddb8c9b302ad16fc9fddbb\" returns successfully"
Jul 6 23:36:06.531715 kubelet[2783]: I0706 23:36:06.531589 2783 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-250"
Jul 6 23:36:06.532393 kubelet[2783]: E0706 23:36:06.532260 2783 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.250:6443/api/v1/nodes\": dial tcp 172.31.20.250:6443: connect: connection refused" node="ip-172-31-20-250"
Jul 6 23:36:06.940813 kubelet[2783]: E0706 23:36:06.940765 2783 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.250:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.250:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:36:08.136318 kubelet[2783]: I0706 23:36:08.135495 2783 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-250"
Jul 6 23:36:09.084414 kubelet[2783]: E0706 23:36:09.084334 2783 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-250\" not found" node="ip-172-31-20-250"
Jul 6 23:36:09.112120 kubelet[2783]: I0706 23:36:09.111297 2783 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-20-250"
Jul 6 23:36:09.112120 kubelet[2783]: E0706 23:36:09.111339 2783 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-20-250\": node \"ip-172-31-20-250\" not found"
Jul 6 23:36:09.909746 kubelet[2783]: I0706 23:36:09.909705 2783 apiserver.go:52] "Watching apiserver"
Jul 6 23:36:09.932618 kubelet[2783]: I0706 23:36:09.932551 2783 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 6 23:36:11.440739 systemd[1]: Reload requested from client PID 3057 ('systemctl') (unit session-7.scope)...
Jul 6 23:36:11.440758 systemd[1]: Reloading...
Jul 6 23:36:11.561341 zram_generator::config[3104]: No configuration found.
Jul 6 23:36:11.701133 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:36:11.840709 systemd[1]: Reloading finished in 399 ms.
Jul 6 23:36:11.868207 kubelet[2783]: I0706 23:36:11.868172 2783 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:36:11.869589 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:36:11.877792 systemd[1]: kubelet.service: Deactivated successfully.
Jul 6 23:36:11.878034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:36:11.878094 systemd[1]: kubelet.service: Consumed 833ms CPU time, 127.6M memory peak.
Jul 6 23:36:11.884639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:36:12.128273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:36:12.138863 (kubelet)[3163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:36:12.171858 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 6 23:36:12.234140 kubelet[3163]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:36:12.234140 kubelet[3163]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:36:12.234140 kubelet[3163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:36:12.234546 kubelet[3163]: I0706 23:36:12.234197 3163 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:36:12.243949 kubelet[3163]: I0706 23:36:12.243909 3163 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 6 23:36:12.244878 kubelet[3163]: I0706 23:36:12.244096 3163 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:36:12.244878 kubelet[3163]: I0706 23:36:12.244499 3163 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 6 23:36:12.246411 kubelet[3163]: I0706 23:36:12.246382 3163 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 6 23:36:12.257681 kubelet[3163]: I0706 23:36:12.257509 3163 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:36:12.261041 kubelet[3163]: E0706 23:36:12.260952 3163 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 6 23:36:12.261337 kubelet[3163]: I0706 23:36:12.261190 3163 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 6 23:36:12.263687 kubelet[3163]: I0706 23:36:12.263632 3163 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:36:12.263781 kubelet[3163]: I0706 23:36:12.263755 3163 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 6 23:36:12.263903 kubelet[3163]: I0706 23:36:12.263873 3163 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:36:12.264079 kubelet[3163]: I0706 23:36:12.263898 3163 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-250","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:36:12.264079 kubelet[3163]: I0706 23:36:12.264067 3163 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:36:12.264079 kubelet[3163]: I0706 23:36:12.264076 3163 container_manager_linux.go:300] "Creating device plugin manager"
Jul 6 23:36:12.264301 kubelet[3163]: I0706 23:36:12.264099 3163 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:36:12.264301 kubelet[3163]: I0706 23:36:12.264203 3163 kubelet.go:408] "Attempting to sync node with API server"
Jul 6 23:36:12.264301 kubelet[3163]: I0706 23:36:12.264213 3163 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:36:12.264383 kubelet[3163]: I0706 23:36:12.264324 3163 kubelet.go:314] "Adding apiserver pod source"
Jul 6 23:36:12.264383 kubelet[3163]: I0706 23:36:12.264334 3163 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:36:12.275308 kubelet[3163]: I0706 23:36:12.274795 3163 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 6 23:36:12.275308 kubelet[3163]: I0706 23:36:12.275223 3163 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:36:12.276237 kubelet[3163]: I0706 23:36:12.276212 3163 server.go:1274] "Started kubelet"
Jul 6 23:36:12.284316 kubelet[3163]: I0706 23:36:12.283384 3163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:36:12.295403 kubelet[3163]: I0706 23:36:12.295355 3163 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:36:12.299103 kubelet[3163]: I0706 23:36:12.299074 3163 server.go:449] "Adding debug handlers to kubelet server"
Jul 6 23:36:12.307326 kubelet[3163]: I0706 23:36:12.306165 3163 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:36:12.307326 kubelet[3163]: I0706 23:36:12.306338 3163 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:36:12.310566 kubelet[3163]: I0706 23:36:12.301184 3163 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:36:12.312392 kubelet[3163]: I0706 23:36:12.302462 3163 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 6 23:36:12.312617 kubelet[3163]: I0706 23:36:12.302482 3163 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 6 23:36:12.312852 kubelet[3163]: E0706 23:36:12.302731 3163 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-20-250\" not found"
Jul 6 23:36:12.313017 kubelet[3163]: I0706 23:36:12.312997 3163 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:36:12.315225 kubelet[3163]: I0706 23:36:12.300547 3163 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:36:12.319314 kubelet[3163]: I0706 23:36:12.315320 3163 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:36:12.326623 kubelet[3163]: I0706 23:36:12.326582 3163 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:36:12.326878 kubelet[3163]: E0706 23:36:12.326854 3163 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:36:12.337511 kubelet[3163]: I0706 23:36:12.337471 3163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:36:12.339926 kubelet[3163]: I0706 23:36:12.339868 3163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:36:12.339926 kubelet[3163]: I0706 23:36:12.339900 3163 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 6 23:36:12.339926 kubelet[3163]: I0706 23:36:12.339928 3163 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 6 23:36:12.341168 kubelet[3163]: E0706 23:36:12.339978 3163 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:36:12.383590 kubelet[3163]: I0706 23:36:12.383477 3163 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 6 23:36:12.383590 kubelet[3163]: I0706 23:36:12.383495 3163 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 6 23:36:12.383590 kubelet[3163]: I0706 23:36:12.383517 3163 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:36:12.384779 kubelet[3163]: I0706 23:36:12.384646 3163 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 6 23:36:12.384779 kubelet[3163]: I0706 23:36:12.384668 3163 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 6 23:36:12.384779 kubelet[3163]: I0706 23:36:12.384693 3163 policy_none.go:49] "None policy: Start"
Jul 6 23:36:12.386222 kubelet[3163]: I0706 23:36:12.385430 3163 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 6 23:36:12.386222 kubelet[3163]: I0706 23:36:12.385456 3163 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:36:12.386222 kubelet[3163]: I0706 23:36:12.385635 3163 state_mem.go:75] "Updated machine memory state"
Jul 6 23:36:12.391059 kubelet[3163]: I0706 23:36:12.391026 3163 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 6 23:36:12.391437 kubelet[3163]: I0706 23:36:12.391224 3163 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:36:12.391437 kubelet[3163]: I0706 23:36:12.391241 3163 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:36:12.392805 kubelet[3163]: I0706 23:36:12.391772 3163 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:36:12.454497 kubelet[3163]: E0706 23:36:12.454239 3163 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-250\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-250"
Jul 6 23:36:12.467554 sudo[3200]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 6 23:36:12.468014 sudo[3200]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 6 23:36:12.499253 kubelet[3163]: I0706 23:36:12.499190 3163 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-250"
Jul 6 23:36:12.514922 kubelet[3163]: I0706 23:36:12.514886 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9add1624b70c91141101864b1ebc4459-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-250\" (UID: \"9add1624b70c91141101864b1ebc4459\") " pod="kube-system/kube-scheduler-ip-172-31-20-250"
Jul 6 23:36:12.516467 kubelet[3163]: I0706 23:36:12.516441 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9543478b30b937c981f7c607eeaa579e-ca-certs\") pod \"kube-apiserver-ip-172-31-20-250\" (UID: \"9543478b30b937c981f7c607eeaa579e\") " pod="kube-system/kube-apiserver-ip-172-31-20-250"
Jul 6 23:36:12.516663 kubelet[3163]: I0706 23:36:12.516642 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d3da3b0c5f61a46f5087d9db9512006-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-250\" (UID: \"0d3da3b0c5f61a46f5087d9db9512006\") " pod="kube-system/kube-controller-manager-ip-172-31-20-250" Jul 6
23:36:12.517028 kubelet[3163]: I0706 23:36:12.516789 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d3da3b0c5f61a46f5087d9db9512006-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-250\" (UID: \"0d3da3b0c5f61a46f5087d9db9512006\") " pod="kube-system/kube-controller-manager-ip-172-31-20-250" Jul 6 23:36:12.517028 kubelet[3163]: I0706 23:36:12.516831 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d3da3b0c5f61a46f5087d9db9512006-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-250\" (UID: \"0d3da3b0c5f61a46f5087d9db9512006\") " pod="kube-system/kube-controller-manager-ip-172-31-20-250" Jul 6 23:36:12.517028 kubelet[3163]: I0706 23:36:12.516854 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9543478b30b937c981f7c607eeaa579e-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-250\" (UID: \"9543478b30b937c981f7c607eeaa579e\") " pod="kube-system/kube-apiserver-ip-172-31-20-250" Jul 6 23:36:12.517028 kubelet[3163]: I0706 23:36:12.516877 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9543478b30b937c981f7c607eeaa579e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-250\" (UID: \"9543478b30b937c981f7c607eeaa579e\") " pod="kube-system/kube-apiserver-ip-172-31-20-250" Jul 6 23:36:12.517028 kubelet[3163]: I0706 23:36:12.516908 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d3da3b0c5f61a46f5087d9db9512006-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-250\" (UID: \"0d3da3b0c5f61a46f5087d9db9512006\") " 
pod="kube-system/kube-controller-manager-ip-172-31-20-250" Jul 6 23:36:12.518338 kubelet[3163]: I0706 23:36:12.516932 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d3da3b0c5f61a46f5087d9db9512006-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-250\" (UID: \"0d3da3b0c5f61a46f5087d9db9512006\") " pod="kube-system/kube-controller-manager-ip-172-31-20-250" Jul 6 23:36:12.519808 kubelet[3163]: I0706 23:36:12.519763 3163 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-20-250" Jul 6 23:36:12.519937 kubelet[3163]: I0706 23:36:12.519836 3163 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-20-250" Jul 6 23:36:13.144800 sudo[3200]: pam_unix(sudo:session): session closed for user root Jul 6 23:36:13.271662 kubelet[3163]: I0706 23:36:13.271605 3163 apiserver.go:52] "Watching apiserver" Jul 6 23:36:13.312933 kubelet[3163]: I0706 23:36:13.312872 3163 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:36:13.377500 kubelet[3163]: E0706 23:36:13.377361 3163 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-250\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-250" Jul 6 23:36:13.470212 kubelet[3163]: I0706 23:36:13.470145 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-250" podStartSLOduration=1.470122874 podStartE2EDuration="1.470122874s" podCreationTimestamp="2025-07-06 23:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:36:13.43571651 +0000 UTC m=+1.289339347" watchObservedRunningTime="2025-07-06 23:36:13.470122874 +0000 UTC m=+1.323745715" Jul 6 23:36:13.470439 kubelet[3163]: I0706 23:36:13.470262 
3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-250" podStartSLOduration=3.470254008 podStartE2EDuration="3.470254008s" podCreationTimestamp="2025-07-06 23:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:36:13.468118014 +0000 UTC m=+1.321740855" watchObservedRunningTime="2025-07-06 23:36:13.470254008 +0000 UTC m=+1.323876846" Jul 6 23:36:13.508969 kubelet[3163]: I0706 23:36:13.508902 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-250" podStartSLOduration=1.508880585 podStartE2EDuration="1.508880585s" podCreationTimestamp="2025-07-06 23:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:36:13.508680856 +0000 UTC m=+1.362303695" watchObservedRunningTime="2025-07-06 23:36:13.508880585 +0000 UTC m=+1.362503415" Jul 6 23:36:15.291851 sudo[2242]: pam_unix(sudo:session): session closed for user root Jul 6 23:36:15.315114 sshd[2241]: Connection closed by 139.178.68.195 port 44574 Jul 6 23:36:15.316135 sshd-session[2239]: pam_unix(sshd:session): session closed for user core Jul 6 23:36:15.321189 systemd-logind[1891]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:36:15.321534 systemd[1]: sshd@6-172.31.20.250:22-139.178.68.195:44574.service: Deactivated successfully. Jul 6 23:36:15.325233 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:36:15.326050 systemd[1]: session-7.scope: Consumed 5.271s CPU time, 205.3M memory peak. Jul 6 23:36:15.327770 systemd-logind[1891]: Removed session 7. 
Jul 6 23:36:17.006248 kubelet[3163]: I0706 23:36:17.006104 3163 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:36:17.006757 containerd[1917]: time="2025-07-06T23:36:17.006469639Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:36:17.006981 kubelet[3163]: I0706 23:36:17.006784 3163 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:36:17.408583 systemd[1]: Created slice kubepods-besteffort-podd6564d8e_628d_4b84_8820_e168f57d8df5.slice - libcontainer container kubepods-besteffort-podd6564d8e_628d_4b84_8820_e168f57d8df5.slice. Jul 6 23:36:17.424158 systemd[1]: Created slice kubepods-burstable-pod669550b1_e723_4136_91ac_bec2a59c905f.slice - libcontainer container kubepods-burstable-pod669550b1_e723_4136_91ac_bec2a59c905f.slice. Jul 6 23:36:17.449837 kubelet[3163]: I0706 23:36:17.449804 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6564d8e-628d-4b84-8820-e168f57d8df5-lib-modules\") pod \"kube-proxy-5nz4p\" (UID: \"d6564d8e-628d-4b84-8820-e168f57d8df5\") " pod="kube-system/kube-proxy-5nz4p" Jul 6 23:36:17.449837 kubelet[3163]: I0706 23:36:17.449839 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-cni-path\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450008 kubelet[3163]: I0706 23:36:17.449860 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-lib-modules\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " 
pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450008 kubelet[3163]: I0706 23:36:17.449878 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/669550b1-e723-4136-91ac-bec2a59c905f-clustermesh-secrets\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450008 kubelet[3163]: I0706 23:36:17.449895 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/669550b1-e723-4136-91ac-bec2a59c905f-cilium-config-path\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450008 kubelet[3163]: I0706 23:36:17.449910 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-cilium-cgroup\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450008 kubelet[3163]: I0706 23:36:17.449924 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-etc-cni-netd\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450008 kubelet[3163]: I0706 23:36:17.449938 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-xtables-lock\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450158 kubelet[3163]: I0706 23:36:17.449967 3163 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-host-proc-sys-net\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450158 kubelet[3163]: I0706 23:36:17.449981 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/669550b1-e723-4136-91ac-bec2a59c905f-hubble-tls\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450158 kubelet[3163]: I0706 23:36:17.449995 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6564d8e-628d-4b84-8820-e168f57d8df5-kube-proxy\") pod \"kube-proxy-5nz4p\" (UID: \"d6564d8e-628d-4b84-8820-e168f57d8df5\") " pod="kube-system/kube-proxy-5nz4p" Jul 6 23:36:17.450158 kubelet[3163]: I0706 23:36:17.450010 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-cilium-run\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450158 kubelet[3163]: I0706 23:36:17.450027 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-bpf-maps\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450158 kubelet[3163]: I0706 23:36:17.450041 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-hostproc\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450335 kubelet[3163]: I0706 23:36:17.450055 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsgps\" (UniqueName: \"kubernetes.io/projected/d6564d8e-628d-4b84-8820-e168f57d8df5-kube-api-access-bsgps\") pod \"kube-proxy-5nz4p\" (UID: \"d6564d8e-628d-4b84-8820-e168f57d8df5\") " pod="kube-system/kube-proxy-5nz4p" Jul 6 23:36:17.450335 kubelet[3163]: I0706 23:36:17.450074 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-host-proc-sys-kernel\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.450335 kubelet[3163]: I0706 23:36:17.450089 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6564d8e-628d-4b84-8820-e168f57d8df5-xtables-lock\") pod \"kube-proxy-5nz4p\" (UID: \"d6564d8e-628d-4b84-8820-e168f57d8df5\") " pod="kube-system/kube-proxy-5nz4p" Jul 6 23:36:17.450335 kubelet[3163]: I0706 23:36:17.450106 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sspml\" (UniqueName: \"kubernetes.io/projected/669550b1-e723-4136-91ac-bec2a59c905f-kube-api-access-sspml\") pod \"cilium-djvlp\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " pod="kube-system/cilium-djvlp" Jul 6 23:36:17.719858 containerd[1917]: time="2025-07-06T23:36:17.718809553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5nz4p,Uid:d6564d8e-628d-4b84-8820-e168f57d8df5,Namespace:kube-system,Attempt:0,}" Jul 6 23:36:17.728130 containerd[1917]: 
time="2025-07-06T23:36:17.728085569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-djvlp,Uid:669550b1-e723-4136-91ac-bec2a59c905f,Namespace:kube-system,Attempt:0,}" Jul 6 23:36:17.780763 containerd[1917]: time="2025-07-06T23:36:17.780634459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:36:17.780763 containerd[1917]: time="2025-07-06T23:36:17.780715724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:36:17.780763 containerd[1917]: time="2025-07-06T23:36:17.780731068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:36:17.781763 containerd[1917]: time="2025-07-06T23:36:17.781487665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:36:17.790059 containerd[1917]: time="2025-07-06T23:36:17.789968171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:36:17.790497 containerd[1917]: time="2025-07-06T23:36:17.790267754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:36:17.790497 containerd[1917]: time="2025-07-06T23:36:17.790327273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:36:17.790497 containerd[1917]: time="2025-07-06T23:36:17.790453942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:36:17.809670 systemd[1]: Started cri-containerd-d5411a4c31d9cc8b7d847956c2cb6c213a4e53101b7b0bace41595f8fedfb414.scope - libcontainer container d5411a4c31d9cc8b7d847956c2cb6c213a4e53101b7b0bace41595f8fedfb414. Jul 6 23:36:17.814832 systemd[1]: Started cri-containerd-0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55.scope - libcontainer container 0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55. Jul 6 23:36:17.872924 containerd[1917]: time="2025-07-06T23:36:17.872879278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-djvlp,Uid:669550b1-e723-4136-91ac-bec2a59c905f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\"" Jul 6 23:36:17.881438 containerd[1917]: time="2025-07-06T23:36:17.881388325Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:36:17.925713 containerd[1917]: time="2025-07-06T23:36:17.925531557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5nz4p,Uid:d6564d8e-628d-4b84-8820-e168f57d8df5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5411a4c31d9cc8b7d847956c2cb6c213a4e53101b7b0bace41595f8fedfb414\"" Jul 6 23:36:17.930013 containerd[1917]: time="2025-07-06T23:36:17.929870624Z" level=info msg="CreateContainer within sandbox \"d5411a4c31d9cc8b7d847956c2cb6c213a4e53101b7b0bace41595f8fedfb414\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:36:17.961226 containerd[1917]: time="2025-07-06T23:36:17.961105906Z" level=info msg="CreateContainer within sandbox \"d5411a4c31d9cc8b7d847956c2cb6c213a4e53101b7b0bace41595f8fedfb414\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"324e82ad5424f7477e670a5a7005bdafc73b8c7cb7a86f844814c7bad8b60b32\"" Jul 6 23:36:17.963345 containerd[1917]: 
time="2025-07-06T23:36:17.962014982Z" level=info msg="StartContainer for \"324e82ad5424f7477e670a5a7005bdafc73b8c7cb7a86f844814c7bad8b60b32\"" Jul 6 23:36:17.998693 systemd[1]: Started cri-containerd-324e82ad5424f7477e670a5a7005bdafc73b8c7cb7a86f844814c7bad8b60b32.scope - libcontainer container 324e82ad5424f7477e670a5a7005bdafc73b8c7cb7a86f844814c7bad8b60b32. Jul 6 23:36:18.042282 containerd[1917]: time="2025-07-06T23:36:18.042233292Z" level=info msg="StartContainer for \"324e82ad5424f7477e670a5a7005bdafc73b8c7cb7a86f844814c7bad8b60b32\" returns successfully" Jul 6 23:36:18.193525 systemd[1]: Created slice kubepods-besteffort-pod82545500_2cf2_4c39_8e58_72493ed6fd02.slice - libcontainer container kubepods-besteffort-pod82545500_2cf2_4c39_8e58_72493ed6fd02.slice. Jul 6 23:36:18.259228 kubelet[3163]: I0706 23:36:18.259087 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82545500-2cf2-4c39-8e58-72493ed6fd02-cilium-config-path\") pod \"cilium-operator-5d85765b45-qg28h\" (UID: \"82545500-2cf2-4c39-8e58-72493ed6fd02\") " pod="kube-system/cilium-operator-5d85765b45-qg28h" Jul 6 23:36:18.259228 kubelet[3163]: I0706 23:36:18.259143 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26f56\" (UniqueName: \"kubernetes.io/projected/82545500-2cf2-4c39-8e58-72493ed6fd02-kube-api-access-26f56\") pod \"cilium-operator-5d85765b45-qg28h\" (UID: \"82545500-2cf2-4c39-8e58-72493ed6fd02\") " pod="kube-system/cilium-operator-5d85765b45-qg28h" Jul 6 23:36:18.499144 containerd[1917]: time="2025-07-06T23:36:18.499084808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qg28h,Uid:82545500-2cf2-4c39-8e58-72493ed6fd02,Namespace:kube-system,Attempt:0,}" Jul 6 23:36:18.545093 containerd[1917]: time="2025-07-06T23:36:18.544731204Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:36:18.545093 containerd[1917]: time="2025-07-06T23:36:18.544818176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:36:18.545979 containerd[1917]: time="2025-07-06T23:36:18.545710061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:36:18.546113 containerd[1917]: time="2025-07-06T23:36:18.545918551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:36:18.598500 systemd[1]: Started cri-containerd-171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2.scope - libcontainer container 171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2. Jul 6 23:36:18.656949 containerd[1917]: time="2025-07-06T23:36:18.656902987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qg28h,Uid:82545500-2cf2-4c39-8e58-72493ed6fd02,Namespace:kube-system,Attempt:0,} returns sandbox id \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\"" Jul 6 23:36:20.157525 kubelet[3163]: I0706 23:36:20.157379 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5nz4p" podStartSLOduration=3.153560883 podStartE2EDuration="3.153560883s" podCreationTimestamp="2025-07-06 23:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:36:18.386462411 +0000 UTC m=+6.240085248" watchObservedRunningTime="2025-07-06 23:36:20.153560883 +0000 UTC m=+8.007183726" Jul 6 23:36:22.863852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917985329.mount: Deactivated successfully. 
Jul 6 23:36:25.402810 containerd[1917]: time="2025-07-06T23:36:25.402752079Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:36:25.403975 containerd[1917]: time="2025-07-06T23:36:25.403922289Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 6 23:36:25.404959 containerd[1917]: time="2025-07-06T23:36:25.404905791Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:36:25.407102 containerd[1917]: time="2025-07-06T23:36:25.407054872Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.525621609s" Jul 6 23:36:25.407102 containerd[1917]: time="2025-07-06T23:36:25.407097578Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 6 23:36:25.409718 containerd[1917]: time="2025-07-06T23:36:25.409678757Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:36:25.411477 containerd[1917]: time="2025-07-06T23:36:25.411444933Z" level=info msg="CreateContainer within sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:36:25.477008 containerd[1917]: time="2025-07-06T23:36:25.476951116Z" level=info msg="CreateContainer within sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4\"" Jul 6 23:36:25.478646 containerd[1917]: time="2025-07-06T23:36:25.477809931Z" level=info msg="StartContainer for \"e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4\"" Jul 6 23:36:25.605538 systemd[1]: Started cri-containerd-e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4.scope - libcontainer container e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4. Jul 6 23:36:25.639536 containerd[1917]: time="2025-07-06T23:36:25.639489290Z" level=info msg="StartContainer for \"e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4\" returns successfully" Jul 6 23:36:25.653479 systemd[1]: cri-containerd-e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4.scope: Deactivated successfully. 
Jul 6 23:36:25.783834 containerd[1917]: time="2025-07-06T23:36:25.769235775Z" level=info msg="shim disconnected" id=e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4 namespace=k8s.io Jul 6 23:36:25.783834 containerd[1917]: time="2025-07-06T23:36:25.783833187Z" level=warning msg="cleaning up after shim disconnected" id=e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4 namespace=k8s.io Jul 6 23:36:25.784061 containerd[1917]: time="2025-07-06T23:36:25.783848193Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:36:25.801390 containerd[1917]: time="2025-07-06T23:36:25.801316105Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:36:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:36:26.424523 containerd[1917]: time="2025-07-06T23:36:26.424482209Z" level=info msg="CreateContainer within sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:36:26.469754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4-rootfs.mount: Deactivated successfully. 
Jul 6 23:36:26.498873 containerd[1917]: time="2025-07-06T23:36:26.498830139Z" level=info msg="CreateContainer within sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8\"" Jul 6 23:36:26.503573 containerd[1917]: time="2025-07-06T23:36:26.503423628Z" level=info msg="StartContainer for \"68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8\"" Jul 6 23:36:26.563451 systemd[1]: Started cri-containerd-68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8.scope - libcontainer container 68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8. Jul 6 23:36:26.649224 containerd[1917]: time="2025-07-06T23:36:26.643795801Z" level=info msg="StartContainer for \"68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8\" returns successfully" Jul 6 23:36:26.649398 update_engine[1894]: I20250706 23:36:26.642890 1894 update_attempter.cc:509] Updating boot flags... Jul 6 23:36:26.683955 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:36:26.685473 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:36:26.686197 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:36:26.697652 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:36:26.705426 systemd[1]: cri-containerd-68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8.scope: Deactivated successfully. Jul 6 23:36:26.792420 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 6 23:36:26.840443 containerd[1917]: time="2025-07-06T23:36:26.838537761Z" level=info msg="shim disconnected" id=68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8 namespace=k8s.io Jul 6 23:36:26.840443 containerd[1917]: time="2025-07-06T23:36:26.838605046Z" level=warning msg="cleaning up after shim disconnected" id=68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8 namespace=k8s.io Jul 6 23:36:26.840443 containerd[1917]: time="2025-07-06T23:36:26.838620620Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:36:26.876534 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3698) Jul 6 23:36:26.902451 containerd[1917]: time="2025-07-06T23:36:26.901250652Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:36:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:36:27.236314 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3694) Jul 6 23:36:27.456380 containerd[1917]: time="2025-07-06T23:36:27.456337882Z" level=info msg="CreateContainer within sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:36:27.475813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount537538969.mount: Deactivated successfully. Jul 6 23:36:27.475980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8-rootfs.mount: Deactivated successfully. 
Jul 6 23:36:27.609352 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3694) Jul 6 23:36:27.636802 containerd[1917]: time="2025-07-06T23:36:27.635469215Z" level=info msg="CreateContainer within sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720\"" Jul 6 23:36:27.639264 containerd[1917]: time="2025-07-06T23:36:27.639224464Z" level=info msg="StartContainer for \"8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720\"" Jul 6 23:36:27.812526 systemd[1]: Started cri-containerd-8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720.scope - libcontainer container 8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720. Jul 6 23:36:27.959784 containerd[1917]: time="2025-07-06T23:36:27.959747015Z" level=info msg="StartContainer for \"8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720\" returns successfully" Jul 6 23:36:27.972455 systemd[1]: cri-containerd-8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720.scope: Deactivated successfully. Jul 6 23:36:27.972765 systemd[1]: cri-containerd-8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720.scope: Consumed 30ms CPU time, 2.8M memory peak, 1M read from disk. 
Jul 6 23:36:28.096197 containerd[1917]: time="2025-07-06T23:36:28.095978493Z" level=info msg="shim disconnected" id=8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720 namespace=k8s.io Jul 6 23:36:28.096197 containerd[1917]: time="2025-07-06T23:36:28.096057756Z" level=warning msg="cleaning up after shim disconnected" id=8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720 namespace=k8s.io Jul 6 23:36:28.096197 containerd[1917]: time="2025-07-06T23:36:28.096070061Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:36:28.126226 containerd[1917]: time="2025-07-06T23:36:28.126161170Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:36:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:36:28.296317 containerd[1917]: time="2025-07-06T23:36:28.295827837Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:36:28.297804 containerd[1917]: time="2025-07-06T23:36:28.297680627Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 6 23:36:28.301451 containerd[1917]: time="2025-07-06T23:36:28.301406396Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:36:28.303340 containerd[1917]: time="2025-07-06T23:36:28.303148575Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo 
digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.893427815s" Jul 6 23:36:28.303340 containerd[1917]: time="2025-07-06T23:36:28.303202046Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 6 23:36:28.306235 containerd[1917]: time="2025-07-06T23:36:28.306192802Z" level=info msg="CreateContainer within sandbox \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:36:28.329088 containerd[1917]: time="2025-07-06T23:36:28.329038283Z" level=info msg="CreateContainer within sandbox \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\"" Jul 6 23:36:28.329832 containerd[1917]: time="2025-07-06T23:36:28.329705302Z" level=info msg="StartContainer for \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\"" Jul 6 23:36:28.359141 systemd[1]: Started cri-containerd-461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d.scope - libcontainer container 461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d. 
Jul 6 23:36:28.407044 containerd[1917]: time="2025-07-06T23:36:28.406897867Z" level=info msg="StartContainer for \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\" returns successfully" Jul 6 23:36:28.455006 containerd[1917]: time="2025-07-06T23:36:28.454956771Z" level=info msg="CreateContainer within sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:36:28.475327 systemd[1]: run-containerd-runc-k8s.io-8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720-runc.f2Ve6W.mount: Deactivated successfully. Jul 6 23:36:28.475641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720-rootfs.mount: Deactivated successfully. Jul 6 23:36:28.494009 containerd[1917]: time="2025-07-06T23:36:28.493846693Z" level=info msg="CreateContainer within sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1\"" Jul 6 23:36:28.495447 containerd[1917]: time="2025-07-06T23:36:28.494900500Z" level=info msg="StartContainer for \"8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1\"" Jul 6 23:36:28.589866 systemd[1]: Started cri-containerd-8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1.scope - libcontainer container 8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1. Jul 6 23:36:28.629016 systemd[1]: cri-containerd-8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1.scope: Deactivated successfully. 
Jul 6 23:36:28.642263 containerd[1917]: time="2025-07-06T23:36:28.641516348Z" level=info msg="StartContainer for \"8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1\" returns successfully" Jul 6 23:36:28.691140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1-rootfs.mount: Deactivated successfully. Jul 6 23:36:28.753418 containerd[1917]: time="2025-07-06T23:36:28.753009850Z" level=info msg="shim disconnected" id=8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1 namespace=k8s.io Jul 6 23:36:28.753418 containerd[1917]: time="2025-07-06T23:36:28.753077186Z" level=warning msg="cleaning up after shim disconnected" id=8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1 namespace=k8s.io Jul 6 23:36:28.753418 containerd[1917]: time="2025-07-06T23:36:28.753091209Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:36:29.470268 containerd[1917]: time="2025-07-06T23:36:29.470222153Z" level=info msg="CreateContainer within sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:36:29.500630 containerd[1917]: time="2025-07-06T23:36:29.500580173Z" level=info msg="CreateContainer within sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\"" Jul 6 23:36:29.504323 containerd[1917]: time="2025-07-06T23:36:29.503599911Z" level=info msg="StartContainer for \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\"" Jul 6 23:36:29.578557 systemd[1]: Started cri-containerd-750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714.scope - libcontainer container 750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714. 
Jul 6 23:36:29.585607 kubelet[3163]: I0706 23:36:29.585532 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-qg28h" podStartSLOduration=1.939638341 podStartE2EDuration="11.585508439s" podCreationTimestamp="2025-07-06 23:36:18 +0000 UTC" firstStartedPulling="2025-07-06 23:36:18.658468157 +0000 UTC m=+6.512090976" lastFinishedPulling="2025-07-06 23:36:28.304338239 +0000 UTC m=+16.157961074" observedRunningTime="2025-07-06 23:36:28.526994417 +0000 UTC m=+16.380617253" watchObservedRunningTime="2025-07-06 23:36:29.585508439 +0000 UTC m=+17.439131278" Jul 6 23:36:29.676383 containerd[1917]: time="2025-07-06T23:36:29.676280284Z" level=info msg="StartContainer for \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\" returns successfully" Jul 6 23:36:29.897824 kubelet[3163]: I0706 23:36:29.897406 3163 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 6 23:36:29.947887 systemd[1]: Created slice kubepods-burstable-podfdd1681b_974c_4bb8_a76e_3abc9b042f38.slice - libcontainer container kubepods-burstable-podfdd1681b_974c_4bb8_a76e_3abc9b042f38.slice. Jul 6 23:36:29.957376 systemd[1]: Created slice kubepods-burstable-pod5c62e093_86eb_477a_8e41_c98b4994d09b.slice - libcontainer container kubepods-burstable-pod5c62e093_86eb_477a_8e41_c98b4994d09b.slice. 
Jul 6 23:36:30.050720 kubelet[3163]: I0706 23:36:30.050535 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh2zs\" (UniqueName: \"kubernetes.io/projected/fdd1681b-974c-4bb8-a76e-3abc9b042f38-kube-api-access-bh2zs\") pod \"coredns-7c65d6cfc9-hjdkc\" (UID: \"fdd1681b-974c-4bb8-a76e-3abc9b042f38\") " pod="kube-system/coredns-7c65d6cfc9-hjdkc" Jul 6 23:36:30.050720 kubelet[3163]: I0706 23:36:30.050582 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdd1681b-974c-4bb8-a76e-3abc9b042f38-config-volume\") pod \"coredns-7c65d6cfc9-hjdkc\" (UID: \"fdd1681b-974c-4bb8-a76e-3abc9b042f38\") " pod="kube-system/coredns-7c65d6cfc9-hjdkc" Jul 6 23:36:30.050720 kubelet[3163]: I0706 23:36:30.050602 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b495v\" (UniqueName: \"kubernetes.io/projected/5c62e093-86eb-477a-8e41-c98b4994d09b-kube-api-access-b495v\") pod \"coredns-7c65d6cfc9-4hwld\" (UID: \"5c62e093-86eb-477a-8e41-c98b4994d09b\") " pod="kube-system/coredns-7c65d6cfc9-4hwld" Jul 6 23:36:30.050720 kubelet[3163]: I0706 23:36:30.050621 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c62e093-86eb-477a-8e41-c98b4994d09b-config-volume\") pod \"coredns-7c65d6cfc9-4hwld\" (UID: \"5c62e093-86eb-477a-8e41-c98b4994d09b\") " pod="kube-system/coredns-7c65d6cfc9-4hwld" Jul 6 23:36:30.255178 containerd[1917]: time="2025-07-06T23:36:30.255116105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hjdkc,Uid:fdd1681b-974c-4bb8-a76e-3abc9b042f38,Namespace:kube-system,Attempt:0,}" Jul 6 23:36:30.262183 containerd[1917]: time="2025-07-06T23:36:30.261997927Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-4hwld,Uid:5c62e093-86eb-477a-8e41-c98b4994d09b,Namespace:kube-system,Attempt:0,}" Jul 6 23:36:32.370742 (udev-worker)[3707]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:36:32.370899 systemd-networkd[1827]: cilium_host: Link UP Jul 6 23:36:32.371891 (udev-worker)[3694]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:36:32.373842 systemd-networkd[1827]: cilium_net: Link UP Jul 6 23:36:32.374431 systemd-networkd[1827]: cilium_net: Gained carrier Jul 6 23:36:32.375711 systemd-networkd[1827]: cilium_host: Gained carrier Jul 6 23:36:32.516367 systemd-networkd[1827]: cilium_vxlan: Link UP Jul 6 23:36:32.516377 systemd-networkd[1827]: cilium_vxlan: Gained carrier Jul 6 23:36:32.997273 systemd-networkd[1827]: cilium_net: Gained IPv6LL Jul 6 23:36:33.059339 kernel: NET: Registered PF_ALG protocol family Jul 6 23:36:33.380488 systemd-networkd[1827]: cilium_host: Gained IPv6LL Jul 6 23:36:33.780722 systemd-networkd[1827]: lxc_health: Link UP Jul 6 23:36:33.780995 systemd-networkd[1827]: lxc_health: Gained carrier Jul 6 23:36:33.917866 (udev-worker)[3699]: Network interface NamePolicy= disabled on kernel command line. 
Jul 6 23:36:33.934145 kernel: eth0: renamed from tmp35c87 Jul 6 23:36:33.941420 systemd-networkd[1827]: lxc95d7ef57d4bc: Link UP Jul 6 23:36:33.943726 systemd-networkd[1827]: lxc95d7ef57d4bc: Gained carrier Jul 6 23:36:34.410331 kernel: eth0: renamed from tmp31a34 Jul 6 23:36:34.418941 systemd-networkd[1827]: lxc4de4f1d300c4: Link UP Jul 6 23:36:34.419791 systemd-networkd[1827]: lxc4de4f1d300c4: Gained carrier Jul 6 23:36:34.471378 systemd-networkd[1827]: cilium_vxlan: Gained IPv6LL Jul 6 23:36:35.172569 systemd-networkd[1827]: lxc_health: Gained IPv6LL Jul 6 23:36:35.684572 systemd-networkd[1827]: lxc4de4f1d300c4: Gained IPv6LL Jul 6 23:36:35.762759 kubelet[3163]: I0706 23:36:35.762675 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-djvlp" podStartSLOduration=11.233800104 podStartE2EDuration="18.762652173s" podCreationTimestamp="2025-07-06 23:36:17 +0000 UTC" firstStartedPulling="2025-07-06 23:36:17.880653987 +0000 UTC m=+5.734276818" lastFinishedPulling="2025-07-06 23:36:25.409506055 +0000 UTC m=+13.263128887" observedRunningTime="2025-07-06 23:36:30.561323429 +0000 UTC m=+18.414946269" watchObservedRunningTime="2025-07-06 23:36:35.762652173 +0000 UTC m=+23.616275012" Jul 6 23:36:35.940497 systemd-networkd[1827]: lxc95d7ef57d4bc: Gained IPv6LL Jul 6 23:36:38.077238 containerd[1917]: time="2025-07-06T23:36:38.075586014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:36:38.077801 containerd[1917]: time="2025-07-06T23:36:38.075685801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:36:38.077801 containerd[1917]: time="2025-07-06T23:36:38.075706843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:36:38.077801 containerd[1917]: time="2025-07-06T23:36:38.075840363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:36:38.105679 containerd[1917]: time="2025-07-06T23:36:38.102476916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:36:38.105679 containerd[1917]: time="2025-07-06T23:36:38.102564604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:36:38.105679 containerd[1917]: time="2025-07-06T23:36:38.102587638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:36:38.110660 containerd[1917]: time="2025-07-06T23:36:38.110542262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:36:38.143554 systemd[1]: Started cri-containerd-35c8770e6a385baa622d9a4c00c4f4fd249b65fc59be159841a04f3a486d9911.scope - libcontainer container 35c8770e6a385baa622d9a4c00c4f4fd249b65fc59be159841a04f3a486d9911. Jul 6 23:36:38.177967 systemd[1]: run-containerd-runc-k8s.io-31a34bc4bdeef6f0e3322fadfcda66b292aab01fa51e94df651608da7faea4ef-runc.dGxhai.mount: Deactivated successfully. Jul 6 23:36:38.188577 systemd[1]: Started cri-containerd-31a34bc4bdeef6f0e3322fadfcda66b292aab01fa51e94df651608da7faea4ef.scope - libcontainer container 31a34bc4bdeef6f0e3322fadfcda66b292aab01fa51e94df651608da7faea4ef. 
Jul 6 23:36:38.277371 containerd[1917]: time="2025-07-06T23:36:38.277230243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4hwld,Uid:5c62e093-86eb-477a-8e41-c98b4994d09b,Namespace:kube-system,Attempt:0,} returns sandbox id \"35c8770e6a385baa622d9a4c00c4f4fd249b65fc59be159841a04f3a486d9911\"" Jul 6 23:36:38.289897 containerd[1917]: time="2025-07-06T23:36:38.288900015Z" level=info msg="CreateContainer within sandbox \"35c8770e6a385baa622d9a4c00c4f4fd249b65fc59be159841a04f3a486d9911\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:36:38.312780 containerd[1917]: time="2025-07-06T23:36:38.312639951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hjdkc,Uid:fdd1681b-974c-4bb8-a76e-3abc9b042f38,Namespace:kube-system,Attempt:0,} returns sandbox id \"31a34bc4bdeef6f0e3322fadfcda66b292aab01fa51e94df651608da7faea4ef\"" Jul 6 23:36:38.320926 containerd[1917]: time="2025-07-06T23:36:38.320869325Z" level=info msg="CreateContainer within sandbox \"31a34bc4bdeef6f0e3322fadfcda66b292aab01fa51e94df651608da7faea4ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:36:38.351457 containerd[1917]: time="2025-07-06T23:36:38.351205606Z" level=info msg="CreateContainer within sandbox \"35c8770e6a385baa622d9a4c00c4f4fd249b65fc59be159841a04f3a486d9911\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e433189d36c80ae6939dc27bc497371ae9425f15227eaaf3c332c32ce0b8905\"" Jul 6 23:36:38.351984 containerd[1917]: time="2025-07-06T23:36:38.351943400Z" level=info msg="StartContainer for \"5e433189d36c80ae6939dc27bc497371ae9425f15227eaaf3c332c32ce0b8905\"" Jul 6 23:36:38.359141 containerd[1917]: time="2025-07-06T23:36:38.358982179Z" level=info msg="CreateContainer within sandbox \"31a34bc4bdeef6f0e3322fadfcda66b292aab01fa51e94df651608da7faea4ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"5a92e1f9ceb9f84b36e203fa411617e6498a4cd850eb72a8d7c9039e6dedf648\"" Jul 6 23:36:38.359730 containerd[1917]: time="2025-07-06T23:36:38.359700774Z" level=info msg="StartContainer for \"5a92e1f9ceb9f84b36e203fa411617e6498a4cd850eb72a8d7c9039e6dedf648\"" Jul 6 23:36:38.399914 systemd[1]: Started cri-containerd-5e433189d36c80ae6939dc27bc497371ae9425f15227eaaf3c332c32ce0b8905.scope - libcontainer container 5e433189d36c80ae6939dc27bc497371ae9425f15227eaaf3c332c32ce0b8905. Jul 6 23:36:38.413786 systemd[1]: Started cri-containerd-5a92e1f9ceb9f84b36e203fa411617e6498a4cd850eb72a8d7c9039e6dedf648.scope - libcontainer container 5a92e1f9ceb9f84b36e203fa411617e6498a4cd850eb72a8d7c9039e6dedf648. Jul 6 23:36:38.463211 containerd[1917]: time="2025-07-06T23:36:38.463067871Z" level=info msg="StartContainer for \"5e433189d36c80ae6939dc27bc497371ae9425f15227eaaf3c332c32ce0b8905\" returns successfully" Jul 6 23:36:38.475471 containerd[1917]: time="2025-07-06T23:36:38.475363927Z" level=info msg="StartContainer for \"5a92e1f9ceb9f84b36e203fa411617e6498a4cd850eb72a8d7c9039e6dedf648\" returns successfully" Jul 6 23:36:38.555385 kubelet[3163]: I0706 23:36:38.555317 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4hwld" podStartSLOduration=20.555282648 podStartE2EDuration="20.555282648s" podCreationTimestamp="2025-07-06 23:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:36:38.537117443 +0000 UTC m=+26.390740282" watchObservedRunningTime="2025-07-06 23:36:38.555282648 +0000 UTC m=+26.408905483" Jul 6 23:36:39.532784 kubelet[3163]: I0706 23:36:39.532250 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hjdkc" podStartSLOduration=21.53223409 podStartE2EDuration="21.53223409s" podCreationTimestamp="2025-07-06 23:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:36:38.556726815 +0000 UTC m=+26.410349652" watchObservedRunningTime="2025-07-06 23:36:39.53223409 +0000 UTC m=+27.385856919" Jul 6 23:36:40.576147 ntpd[1886]: Listen normally on 8 cilium_host 192.168.0.14:123 Jul 6 23:36:40.576223 ntpd[1886]: Listen normally on 9 cilium_net [fe80::306e:3bff:fee5:12fc%4]:123 Jul 6 23:36:40.576272 ntpd[1886]: Listen normally on 10 cilium_host [fe80::901a:a4ff:feb4:eec8%5]:123 Jul 6 23:36:40.576334 ntpd[1886]: Listen normally on 11 cilium_vxlan [fe80::40b:ddff:feee:8e17%6]:123 Jul 6 23:36:40.576367 ntpd[1886]: Listen normally on 12 lxc_health [fe80::485d:daff:fe03:5ae7%8]:123 Jul 6 23:36:40.576395 ntpd[1886]: Listen normally on 13 lxc95d7ef57d4bc [fe80::641b:41ff:fe44:b315%10]:123 Jul 6 23:36:40.576422 ntpd[1886]: Listen normally on 14 lxc4de4f1d300c4 [fe80::5c4e:c5ff:fe44:d244%12]:123 Jul 6 23:36:44.592142 systemd[1]: Started sshd@7-172.31.20.250:22-139.178.68.195:46086.service - OpenSSH per-connection server daemon (139.178.68.195:46086).
Jul 6 23:36:44.787437 sshd[4797]: Accepted publickey for core from 139.178.68.195 port 46086 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:36:44.790126 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:36:44.808466 systemd-logind[1891]: New session 8 of user core. Jul 6 23:36:44.812244 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:36:45.648186 sshd[4799]: Connection closed by 139.178.68.195 port 46086 Jul 6 23:36:45.649029 sshd-session[4797]: pam_unix(sshd:session): session closed for user core Jul 6 23:36:45.653454 systemd[1]: sshd@7-172.31.20.250:22-139.178.68.195:46086.service: Deactivated successfully. Jul 6 23:36:45.656184 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:36:45.657769 systemd-logind[1891]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:36:45.659001 systemd-logind[1891]: Removed session 8. Jul 6 23:36:50.686643 systemd[1]: Started sshd@8-172.31.20.250:22-139.178.68.195:56438.service - OpenSSH per-connection server daemon (139.178.68.195:56438). Jul 6 23:36:50.871952 sshd[4817]: Accepted publickey for core from 139.178.68.195 port 56438 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:36:50.872707 sshd-session[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:36:50.878982 systemd-logind[1891]: New session 9 of user core. Jul 6 23:36:50.887556 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:36:51.124861 sshd[4819]: Connection closed by 139.178.68.195 port 56438 Jul 6 23:36:51.125998 sshd-session[4817]: pam_unix(sshd:session): session closed for user core Jul 6 23:36:51.132881 systemd[1]: sshd@8-172.31.20.250:22-139.178.68.195:56438.service: Deactivated successfully. Jul 6 23:36:51.134540 systemd-logind[1891]: Session 9 logged out. Waiting for processes to exit. 
Jul 6 23:36:51.135424 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:36:51.136796 systemd-logind[1891]: Removed session 9. Jul 6 23:36:56.165090 systemd[1]: Started sshd@9-172.31.20.250:22-139.178.68.195:56452.service - OpenSSH per-connection server daemon (139.178.68.195:56452). Jul 6 23:36:56.334068 sshd[4832]: Accepted publickey for core from 139.178.68.195 port 56452 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:36:56.335607 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:36:56.341553 systemd-logind[1891]: New session 10 of user core. Jul 6 23:36:56.349569 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:36:56.544964 sshd[4834]: Connection closed by 139.178.68.195 port 56452 Jul 6 23:36:56.545524 sshd-session[4832]: pam_unix(sshd:session): session closed for user core Jul 6 23:36:56.549151 systemd[1]: sshd@9-172.31.20.250:22-139.178.68.195:56452.service: Deactivated successfully. Jul 6 23:36:56.551656 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:36:56.553601 systemd-logind[1891]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:36:56.555567 systemd-logind[1891]: Removed session 10. Jul 6 23:37:01.637783 systemd[1]: Started sshd@10-172.31.20.250:22-139.178.68.195:50184.service - OpenSSH per-connection server daemon (139.178.68.195:50184). Jul 6 23:37:01.887386 sshd[4847]: Accepted publickey for core from 139.178.68.195 port 50184 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:01.888141 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:01.922739 systemd-logind[1891]: New session 11 of user core. Jul 6 23:37:01.935741 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 6 23:37:02.211831 sshd[4849]: Connection closed by 139.178.68.195 port 50184 Jul 6 23:37:02.214887 sshd-session[4847]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:02.222440 systemd-logind[1891]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:37:02.223258 systemd[1]: sshd@10-172.31.20.250:22-139.178.68.195:50184.service: Deactivated successfully. Jul 6 23:37:02.225804 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:37:02.227072 systemd-logind[1891]: Removed session 11. Jul 6 23:37:02.260456 systemd[1]: Started sshd@11-172.31.20.250:22-139.178.68.195:50186.service - OpenSSH per-connection server daemon (139.178.68.195:50186). Jul 6 23:37:02.456323 sshd[4862]: Accepted publickey for core from 139.178.68.195 port 50186 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:02.458676 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:02.465340 systemd-logind[1891]: New session 12 of user core. Jul 6 23:37:02.471553 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:37:02.725673 sshd[4864]: Connection closed by 139.178.68.195 port 50186 Jul 6 23:37:02.728624 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:02.734246 systemd-logind[1891]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:37:02.735926 systemd[1]: sshd@11-172.31.20.250:22-139.178.68.195:50186.service: Deactivated successfully. Jul 6 23:37:02.740075 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:37:02.742348 systemd-logind[1891]: Removed session 12. Jul 6 23:37:02.766685 systemd[1]: Started sshd@12-172.31.20.250:22-139.178.68.195:50196.service - OpenSSH per-connection server daemon (139.178.68.195:50196). 
Jul 6 23:37:02.932459 sshd[4875]: Accepted publickey for core from 139.178.68.195 port 50196 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:02.936036 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:02.941568 systemd-logind[1891]: New session 13 of user core. Jul 6 23:37:02.947635 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:37:03.153805 sshd[4877]: Connection closed by 139.178.68.195 port 50196 Jul 6 23:37:03.155316 sshd-session[4875]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:03.162745 systemd[1]: sshd@12-172.31.20.250:22-139.178.68.195:50196.service: Deactivated successfully. Jul 6 23:37:03.168497 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:37:03.170995 systemd-logind[1891]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:37:03.173164 systemd-logind[1891]: Removed session 13. Jul 6 23:37:08.191667 systemd[1]: Started sshd@13-172.31.20.250:22-139.178.68.195:36240.service - OpenSSH per-connection server daemon (139.178.68.195:36240). Jul 6 23:37:08.354303 sshd[4891]: Accepted publickey for core from 139.178.68.195 port 36240 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:08.355887 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:08.360983 systemd-logind[1891]: New session 14 of user core. Jul 6 23:37:08.369605 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:37:08.561506 sshd[4893]: Connection closed by 139.178.68.195 port 36240 Jul 6 23:37:08.562357 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:08.566103 systemd[1]: sshd@13-172.31.20.250:22-139.178.68.195:36240.service: Deactivated successfully. Jul 6 23:37:08.568318 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:37:08.569168 systemd-logind[1891]: Session 14 logged out. 
Waiting for processes to exit. Jul 6 23:37:08.571460 systemd-logind[1891]: Removed session 14. Jul 6 23:37:13.603118 systemd[1]: Started sshd@14-172.31.20.250:22-139.178.68.195:36246.service - OpenSSH per-connection server daemon (139.178.68.195:36246). Jul 6 23:37:13.772602 sshd[4907]: Accepted publickey for core from 139.178.68.195 port 36246 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:13.774440 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:13.780193 systemd-logind[1891]: New session 15 of user core. Jul 6 23:37:13.786543 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:37:13.973448 sshd[4909]: Connection closed by 139.178.68.195 port 36246 Jul 6 23:37:13.975186 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:13.979589 systemd-logind[1891]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:37:13.980899 systemd[1]: sshd@14-172.31.20.250:22-139.178.68.195:36246.service: Deactivated successfully. Jul 6 23:37:13.983884 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:37:13.985067 systemd-logind[1891]: Removed session 15. Jul 6 23:37:14.019764 systemd[1]: Started sshd@15-172.31.20.250:22-139.178.68.195:36252.service - OpenSSH per-connection server daemon (139.178.68.195:36252). Jul 6 23:37:14.188327 sshd[4921]: Accepted publickey for core from 139.178.68.195 port 36252 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:14.188940 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:14.194194 systemd-logind[1891]: New session 16 of user core. Jul 6 23:37:14.204539 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 6 23:37:14.795779 sshd[4923]: Connection closed by 139.178.68.195 port 36252 Jul 6 23:37:14.796865 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:14.801125 systemd[1]: sshd@15-172.31.20.250:22-139.178.68.195:36252.service: Deactivated successfully. Jul 6 23:37:14.803554 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:37:14.804929 systemd-logind[1891]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:37:14.806579 systemd-logind[1891]: Removed session 16. Jul 6 23:37:14.830643 systemd[1]: Started sshd@16-172.31.20.250:22-139.178.68.195:36264.service - OpenSSH per-connection server daemon (139.178.68.195:36264). Jul 6 23:37:15.006873 sshd[4935]: Accepted publickey for core from 139.178.68.195 port 36264 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:15.009161 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:15.015642 systemd-logind[1891]: New session 17 of user core. Jul 6 23:37:15.022552 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:37:16.827190 sshd[4937]: Connection closed by 139.178.68.195 port 36264 Jul 6 23:37:16.827606 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:16.837895 systemd[1]: sshd@16-172.31.20.250:22-139.178.68.195:36264.service: Deactivated successfully. Jul 6 23:37:16.842256 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:37:16.842787 systemd[1]: session-17.scope: Consumed 560ms CPU time, 64.2M memory peak. Jul 6 23:37:16.846155 systemd-logind[1891]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:37:16.847760 systemd-logind[1891]: Removed session 17. Jul 6 23:37:16.867972 systemd[1]: Started sshd@17-172.31.20.250:22-139.178.68.195:36278.service - OpenSSH per-connection server daemon (139.178.68.195:36278). 
Jul 6 23:37:17.039514 sshd[4955]: Accepted publickey for core from 139.178.68.195 port 36278 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:17.041380 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:17.047188 systemd-logind[1891]: New session 18 of user core. Jul 6 23:37:17.051487 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:37:17.415151 sshd[4957]: Connection closed by 139.178.68.195 port 36278 Jul 6 23:37:17.416428 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:17.420816 systemd[1]: sshd@17-172.31.20.250:22-139.178.68.195:36278.service: Deactivated successfully. Jul 6 23:37:17.425733 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:37:17.427003 systemd-logind[1891]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:37:17.428044 systemd-logind[1891]: Removed session 18. Jul 6 23:37:17.460046 systemd[1]: Started sshd@18-172.31.20.250:22-139.178.68.195:36286.service - OpenSSH per-connection server daemon (139.178.68.195:36286). Jul 6 23:37:17.625224 sshd[4967]: Accepted publickey for core from 139.178.68.195 port 36286 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:17.626915 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:17.633532 systemd-logind[1891]: New session 19 of user core. Jul 6 23:37:17.640529 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:37:17.844352 sshd[4969]: Connection closed by 139.178.68.195 port 36286 Jul 6 23:37:17.845697 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:17.850153 systemd[1]: sshd@18-172.31.20.250:22-139.178.68.195:36286.service: Deactivated successfully. Jul 6 23:37:17.852849 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:37:17.853815 systemd-logind[1891]: Session 19 logged out. 
Waiting for processes to exit. Jul 6 23:37:17.855256 systemd-logind[1891]: Removed session 19. Jul 6 23:37:22.879158 systemd[1]: Started sshd@19-172.31.20.250:22-139.178.68.195:50700.service - OpenSSH per-connection server daemon (139.178.68.195:50700). Jul 6 23:37:23.051339 sshd[4985]: Accepted publickey for core from 139.178.68.195 port 50700 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:23.052645 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:23.058838 systemd-logind[1891]: New session 20 of user core. Jul 6 23:37:23.067567 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:37:23.283506 sshd[4987]: Connection closed by 139.178.68.195 port 50700 Jul 6 23:37:23.285064 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:23.289595 systemd[1]: sshd@19-172.31.20.250:22-139.178.68.195:50700.service: Deactivated successfully. Jul 6 23:37:23.292784 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:37:23.294206 systemd-logind[1891]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:37:23.295870 systemd-logind[1891]: Removed session 20. Jul 6 23:37:28.321653 systemd[1]: Started sshd@20-172.31.20.250:22-139.178.68.195:42518.service - OpenSSH per-connection server daemon (139.178.68.195:42518). Jul 6 23:37:28.483067 sshd[5002]: Accepted publickey for core from 139.178.68.195 port 42518 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:28.484564 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:28.490218 systemd-logind[1891]: New session 21 of user core. Jul 6 23:37:28.499524 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 6 23:37:28.682931 sshd[5004]: Connection closed by 139.178.68.195 port 42518 Jul 6 23:37:28.683551 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:28.686765 systemd[1]: sshd@20-172.31.20.250:22-139.178.68.195:42518.service: Deactivated successfully. Jul 6 23:37:28.689305 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:37:28.691672 systemd-logind[1891]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:37:28.693673 systemd-logind[1891]: Removed session 21. Jul 6 23:37:33.728406 systemd[1]: Started sshd@21-172.31.20.250:22-139.178.68.195:42534.service - OpenSSH per-connection server daemon (139.178.68.195:42534). Jul 6 23:37:33.896336 sshd[5016]: Accepted publickey for core from 139.178.68.195 port 42534 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:33.897399 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:33.902523 systemd-logind[1891]: New session 22 of user core. Jul 6 23:37:33.909526 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:37:34.098359 sshd[5018]: Connection closed by 139.178.68.195 port 42534 Jul 6 23:37:34.098899 sshd-session[5016]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:34.102961 systemd-logind[1891]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:37:34.103732 systemd[1]: sshd@21-172.31.20.250:22-139.178.68.195:42534.service: Deactivated successfully. Jul 6 23:37:34.105861 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:37:34.107028 systemd-logind[1891]: Removed session 22. Jul 6 23:37:39.135606 systemd[1]: Started sshd@22-172.31.20.250:22-139.178.68.195:44076.service - OpenSSH per-connection server daemon (139.178.68.195:44076). 
Jul 6 23:37:39.298325 sshd[5031]: Accepted publickey for core from 139.178.68.195 port 44076 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:39.299756 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:39.304655 systemd-logind[1891]: New session 23 of user core. Jul 6 23:37:39.317539 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:37:39.497659 sshd[5033]: Connection closed by 139.178.68.195 port 44076 Jul 6 23:37:39.498629 sshd-session[5031]: pam_unix(sshd:session): session closed for user core Jul 6 23:37:39.503176 systemd[1]: sshd@22-172.31.20.250:22-139.178.68.195:44076.service: Deactivated successfully. Jul 6 23:37:39.505494 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:37:39.506674 systemd-logind[1891]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:37:39.508150 systemd-logind[1891]: Removed session 23. Jul 6 23:37:39.538931 systemd[1]: Started sshd@23-172.31.20.250:22-139.178.68.195:44080.service - OpenSSH per-connection server daemon (139.178.68.195:44080). Jul 6 23:37:39.700976 sshd[5044]: Accepted publickey for core from 139.178.68.195 port 44080 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I Jul 6 23:37:39.702614 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:37:39.708840 systemd-logind[1891]: New session 24 of user core. Jul 6 23:37:39.715512 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:37:41.318093 systemd[1]: run-containerd-runc-k8s.io-750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714-runc.bjpGxF.mount: Deactivated successfully. 
Jul 6 23:37:41.344253 containerd[1917]: time="2025-07-06T23:37:41.344178585Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:37:41.362542 containerd[1917]: time="2025-07-06T23:37:41.362494860Z" level=info msg="StopContainer for \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\" with timeout 30 (s)" Jul 6 23:37:41.362893 containerd[1917]: time="2025-07-06T23:37:41.362646206Z" level=info msg="StopContainer for \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\" with timeout 2 (s)" Jul 6 23:37:41.364316 containerd[1917]: time="2025-07-06T23:37:41.364256754Z" level=info msg="Stop container \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\" with signal terminated" Jul 6 23:37:41.364472 containerd[1917]: time="2025-07-06T23:37:41.364255055Z" level=info msg="Stop container \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\" with signal terminated" Jul 6 23:37:41.380782 systemd-networkd[1827]: lxc_health: Link DOWN Jul 6 23:37:41.380792 systemd-networkd[1827]: lxc_health: Lost carrier Jul 6 23:37:41.383543 systemd[1]: cri-containerd-461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d.scope: Deactivated successfully. Jul 6 23:37:41.401670 systemd[1]: cri-containerd-750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714.scope: Deactivated successfully. Jul 6 23:37:41.402203 systemd[1]: cri-containerd-750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714.scope: Consumed 7.392s CPU time, 198.6M memory peak, 75.6M read from disk, 13.3M written to disk. Jul 6 23:37:41.423203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d-rootfs.mount: Deactivated successfully. 
Jul 6 23:37:41.432774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714-rootfs.mount: Deactivated successfully. Jul 6 23:37:41.466914 containerd[1917]: time="2025-07-06T23:37:41.466830545Z" level=info msg="shim disconnected" id=750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714 namespace=k8s.io Jul 6 23:37:41.466914 containerd[1917]: time="2025-07-06T23:37:41.466889163Z" level=warning msg="cleaning up after shim disconnected" id=750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714 namespace=k8s.io Jul 6 23:37:41.466914 containerd[1917]: time="2025-07-06T23:37:41.466899698Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:37:41.467708 containerd[1917]: time="2025-07-06T23:37:41.467522529Z" level=info msg="shim disconnected" id=461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d namespace=k8s.io Jul 6 23:37:41.467708 containerd[1917]: time="2025-07-06T23:37:41.467560760Z" level=warning msg="cleaning up after shim disconnected" id=461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d namespace=k8s.io Jul 6 23:37:41.467708 containerd[1917]: time="2025-07-06T23:37:41.467568980Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:37:41.488878 containerd[1917]: time="2025-07-06T23:37:41.488806824Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:37:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:37:41.494573 containerd[1917]: time="2025-07-06T23:37:41.494527327Z" level=info msg="StopContainer for \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\" returns successfully" Jul 6 23:37:41.496612 containerd[1917]: time="2025-07-06T23:37:41.496396773Z" level=info msg="StopContainer for \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\" returns 
successfully" Jul 6 23:37:41.503002 containerd[1917]: time="2025-07-06T23:37:41.502945442Z" level=info msg="StopPodSandbox for \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\"" Jul 6 23:37:41.503264 containerd[1917]: time="2025-07-06T23:37:41.503000955Z" level=info msg="StopPodSandbox for \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\"" Jul 6 23:37:41.510313 containerd[1917]: time="2025-07-06T23:37:41.504273854Z" level=info msg="Container to stop \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:37:41.510673 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2-shm.mount: Deactivated successfully. Jul 6 23:37:41.511260 containerd[1917]: time="2025-07-06T23:37:41.504260488Z" level=info msg="Container to stop \"68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:37:41.511333 containerd[1917]: time="2025-07-06T23:37:41.511275304Z" level=info msg="Container to stop \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:37:41.511333 containerd[1917]: time="2025-07-06T23:37:41.511304593Z" level=info msg="Container to stop \"8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:37:41.511333 containerd[1917]: time="2025-07-06T23:37:41.511314554Z" level=info msg="Container to stop \"8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:37:41.511333 containerd[1917]: time="2025-07-06T23:37:41.511322721Z" level=info msg="Container to stop 
\"e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:37:41.526083 systemd[1]: cri-containerd-171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2.scope: Deactivated successfully. Jul 6 23:37:41.537239 systemd[1]: cri-containerd-0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55.scope: Deactivated successfully. Jul 6 23:37:41.570129 containerd[1917]: time="2025-07-06T23:37:41.569119592Z" level=info msg="shim disconnected" id=0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55 namespace=k8s.io Jul 6 23:37:41.570129 containerd[1917]: time="2025-07-06T23:37:41.569172187Z" level=warning msg="cleaning up after shim disconnected" id=0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55 namespace=k8s.io Jul 6 23:37:41.570129 containerd[1917]: time="2025-07-06T23:37:41.569180411Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:37:41.570129 containerd[1917]: time="2025-07-06T23:37:41.569448312Z" level=info msg="shim disconnected" id=171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2 namespace=k8s.io Jul 6 23:37:41.570129 containerd[1917]: time="2025-07-06T23:37:41.569483961Z" level=warning msg="cleaning up after shim disconnected" id=171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2 namespace=k8s.io Jul 6 23:37:41.570129 containerd[1917]: time="2025-07-06T23:37:41.569491799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:37:41.588509 containerd[1917]: time="2025-07-06T23:37:41.588473151Z" level=info msg="TearDown network for sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" successfully" Jul 6 23:37:41.588662 containerd[1917]: time="2025-07-06T23:37:41.588646953Z" level=info msg="StopPodSandbox for \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" returns successfully" Jul 6 23:37:41.594214 containerd[1917]: 
time="2025-07-06T23:37:41.594178053Z" level=info msg="TearDown network for sandbox \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\" successfully" Jul 6 23:37:41.594214 containerd[1917]: time="2025-07-06T23:37:41.594209270Z" level=info msg="StopPodSandbox for \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\" returns successfully" Jul 6 23:37:41.639879 kubelet[3163]: I0706 23:37:41.639786 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-cni-path\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640446 kubelet[3163]: I0706 23:37:41.640421 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-cilium-run\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640522 kubelet[3163]: I0706 23:37:41.640462 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-cilium-cgroup\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640522 kubelet[3163]: I0706 23:37:41.640484 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-xtables-lock\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640522 kubelet[3163]: I0706 23:37:41.640517 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/669550b1-e723-4136-91ac-bec2a59c905f-hubble-tls\") pod 
\"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640651 kubelet[3163]: I0706 23:37:41.640540 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-host-proc-sys-kernel\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640651 kubelet[3163]: I0706 23:37:41.640570 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/669550b1-e723-4136-91ac-bec2a59c905f-cilium-config-path\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640651 kubelet[3163]: I0706 23:37:41.640593 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-host-proc-sys-net\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640651 kubelet[3163]: I0706 23:37:41.640620 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sspml\" (UniqueName: \"kubernetes.io/projected/669550b1-e723-4136-91ac-bec2a59c905f-kube-api-access-sspml\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640651 kubelet[3163]: I0706 23:37:41.640642 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-bpf-maps\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640862 kubelet[3163]: I0706 23:37:41.640668 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-26f56\" (UniqueName: \"kubernetes.io/projected/82545500-2cf2-4c39-8e58-72493ed6fd02-kube-api-access-26f56\") pod \"82545500-2cf2-4c39-8e58-72493ed6fd02\" (UID: \"82545500-2cf2-4c39-8e58-72493ed6fd02\") " Jul 6 23:37:41.640862 kubelet[3163]: I0706 23:37:41.640694 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82545500-2cf2-4c39-8e58-72493ed6fd02-cilium-config-path\") pod \"82545500-2cf2-4c39-8e58-72493ed6fd02\" (UID: \"82545500-2cf2-4c39-8e58-72493ed6fd02\") " Jul 6 23:37:41.640862 kubelet[3163]: I0706 23:37:41.640717 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-lib-modules\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640862 kubelet[3163]: I0706 23:37:41.640764 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/669550b1-e723-4136-91ac-bec2a59c905f-clustermesh-secrets\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640862 kubelet[3163]: I0706 23:37:41.640786 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-etc-cni-netd\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.640862 kubelet[3163]: I0706 23:37:41.640812 3163 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-hostproc\") pod \"669550b1-e723-4136-91ac-bec2a59c905f\" (UID: \"669550b1-e723-4136-91ac-bec2a59c905f\") " Jul 6 23:37:41.642640 
kubelet[3163]: I0706 23:37:41.640087 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-cni-path" (OuterVolumeSpecName: "cni-path") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:37:41.642640 kubelet[3163]: I0706 23:37:41.641901 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:37:41.642640 kubelet[3163]: I0706 23:37:41.641934 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:37:41.642640 kubelet[3163]: I0706 23:37:41.641954 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:37:41.642640 kubelet[3163]: I0706 23:37:41.641974 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:37:41.642976 kubelet[3163]: I0706 23:37:41.640888 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-hostproc" (OuterVolumeSpecName: "hostproc") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:37:41.649483 kubelet[3163]: I0706 23:37:41.649415 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/669550b1-e723-4136-91ac-bec2a59c905f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:37:41.649483 kubelet[3163]: I0706 23:37:41.649475 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:37:41.653316 kubelet[3163]: I0706 23:37:41.652094 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/669550b1-e723-4136-91ac-bec2a59c905f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:37:41.653316 kubelet[3163]: I0706 23:37:41.653116 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/669550b1-e723-4136-91ac-bec2a59c905f-kube-api-access-sspml" (OuterVolumeSpecName: "kube-api-access-sspml") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "kube-api-access-sspml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:37:41.653316 kubelet[3163]: I0706 23:37:41.653160 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:37:41.654241 kubelet[3163]: I0706 23:37:41.654219 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:37:41.656809 kubelet[3163]: I0706 23:37:41.656782 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82545500-2cf2-4c39-8e58-72493ed6fd02-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "82545500-2cf2-4c39-8e58-72493ed6fd02" (UID: "82545500-2cf2-4c39-8e58-72493ed6fd02"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:37:41.657649 kubelet[3163]: I0706 23:37:41.657609 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82545500-2cf2-4c39-8e58-72493ed6fd02-kube-api-access-26f56" (OuterVolumeSpecName: "kube-api-access-26f56") pod "82545500-2cf2-4c39-8e58-72493ed6fd02" (UID: "82545500-2cf2-4c39-8e58-72493ed6fd02"). InnerVolumeSpecName "kube-api-access-26f56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:37:41.659475 kubelet[3163]: I0706 23:37:41.659300 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:37:41.661776 kubelet[3163]: I0706 23:37:41.661741 3163 scope.go:117] "RemoveContainer" containerID="461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d" Jul 6 23:37:41.662490 kubelet[3163]: I0706 23:37:41.662460 3163 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/669550b1-e723-4136-91ac-bec2a59c905f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "669550b1-e723-4136-91ac-bec2a59c905f" (UID: "669550b1-e723-4136-91ac-bec2a59c905f"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:37:41.667873 systemd[1]: Removed slice kubepods-besteffort-pod82545500_2cf2_4c39_8e58_72493ed6fd02.slice - libcontainer container kubepods-besteffort-pod82545500_2cf2_4c39_8e58_72493ed6fd02.slice. Jul 6 23:37:41.674466 containerd[1917]: time="2025-07-06T23:37:41.674409405Z" level=info msg="RemoveContainer for \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\"" Jul 6 23:37:41.679795 containerd[1917]: time="2025-07-06T23:37:41.679739678Z" level=info msg="RemoveContainer for \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\" returns successfully" Jul 6 23:37:41.690107 kubelet[3163]: I0706 23:37:41.687770 3163 scope.go:117] "RemoveContainer" containerID="461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d" Jul 6 23:37:41.694088 containerd[1917]: time="2025-07-06T23:37:41.693441154Z" level=error msg="ContainerStatus for \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\": not found" Jul 6 23:37:41.698323 systemd[1]: Removed slice kubepods-burstable-pod669550b1_e723_4136_91ac_bec2a59c905f.slice - libcontainer container kubepods-burstable-pod669550b1_e723_4136_91ac_bec2a59c905f.slice. Jul 6 23:37:41.698488 systemd[1]: kubepods-burstable-pod669550b1_e723_4136_91ac_bec2a59c905f.slice: Consumed 7.504s CPU time, 199M memory peak, 76.6M read from disk, 13.3M written to disk. 
Jul 6 23:37:41.710745 kubelet[3163]: E0706 23:37:41.710708 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\": not found" containerID="461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d"
Jul 6 23:37:41.738727 kubelet[3163]: I0706 23:37:41.710747 3163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d"} err="failed to get container status \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\": rpc error: code = NotFound desc = an error occurred when try to find container \"461d4a4e0f5be04237fa3d0618eb7006cb81747c1ce59789a4e06d65b2fec85d\": not found"
Jul 6 23:37:41.738727 kubelet[3163]: I0706 23:37:41.738605 3163 scope.go:117] "RemoveContainer" containerID="750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714"
Jul 6 23:37:41.741172 containerd[1917]: time="2025-07-06T23:37:41.740612400Z" level=info msg="RemoveContainer for \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\""
Jul 6 23:37:41.742421 kubelet[3163]: I0706 23:37:41.742151 3163 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/669550b1-e723-4136-91ac-bec2a59c905f-cilium-config-path\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742421 kubelet[3163]: I0706 23:37:41.742184 3163 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/669550b1-e723-4136-91ac-bec2a59c905f-hubble-tls\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742421 kubelet[3163]: I0706 23:37:41.742208 3163 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-host-proc-sys-kernel\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742421 kubelet[3163]: I0706 23:37:41.742217 3163 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-host-proc-sys-net\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742421 kubelet[3163]: I0706 23:37:41.742225 3163 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sspml\" (UniqueName: \"kubernetes.io/projected/669550b1-e723-4136-91ac-bec2a59c905f-kube-api-access-sspml\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742421 kubelet[3163]: I0706 23:37:41.742234 3163 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-bpf-maps\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742421 kubelet[3163]: I0706 23:37:41.742247 3163 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26f56\" (UniqueName: \"kubernetes.io/projected/82545500-2cf2-4c39-8e58-72493ed6fd02-kube-api-access-26f56\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742421 kubelet[3163]: I0706 23:37:41.742260 3163 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82545500-2cf2-4c39-8e58-72493ed6fd02-cilium-config-path\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742811 kubelet[3163]: I0706 23:37:41.742280 3163 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-lib-modules\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742811 kubelet[3163]: I0706 23:37:41.742297 3163 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/669550b1-e723-4136-91ac-bec2a59c905f-clustermesh-secrets\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742811 kubelet[3163]: I0706 23:37:41.742306 3163 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-etc-cni-netd\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742811 kubelet[3163]: I0706 23:37:41.742314 3163 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-hostproc\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742811 kubelet[3163]: I0706 23:37:41.742321 3163 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-cni-path\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742811 kubelet[3163]: I0706 23:37:41.742329 3163 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-cilium-run\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742811 kubelet[3163]: I0706 23:37:41.742336 3163 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-cilium-cgroup\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.742811 kubelet[3163]: I0706 23:37:41.742355 3163 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/669550b1-e723-4136-91ac-bec2a59c905f-xtables-lock\") on node \"ip-172-31-20-250\" DevicePath \"\""
Jul 6 23:37:41.746700 containerd[1917]: time="2025-07-06T23:37:41.746654950Z" level=info msg="RemoveContainer for \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\" returns successfully"
Jul 6 23:37:41.747422 kubelet[3163]: I0706 23:37:41.746938 3163 scope.go:117] "RemoveContainer" containerID="8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1"
Jul 6 23:37:41.750929 containerd[1917]: time="2025-07-06T23:37:41.750891205Z" level=info msg="RemoveContainer for \"8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1\""
Jul 6 23:37:41.756970 containerd[1917]: time="2025-07-06T23:37:41.756928404Z" level=info msg="RemoveContainer for \"8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1\" returns successfully"
Jul 6 23:37:41.757524 kubelet[3163]: I0706 23:37:41.757200 3163 scope.go:117] "RemoveContainer" containerID="8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720"
Jul 6 23:37:41.763468 containerd[1917]: time="2025-07-06T23:37:41.763119556Z" level=info msg="RemoveContainer for \"8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720\""
Jul 6 23:37:41.768651 containerd[1917]: time="2025-07-06T23:37:41.768597186Z" level=info msg="RemoveContainer for \"8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720\" returns successfully"
Jul 6 23:37:41.769011 kubelet[3163]: I0706 23:37:41.768860 3163 scope.go:117] "RemoveContainer" containerID="68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8"
Jul 6 23:37:41.770388 containerd[1917]: time="2025-07-06T23:37:41.770356018Z" level=info msg="RemoveContainer for \"68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8\""
Jul 6 23:37:41.775751 containerd[1917]: time="2025-07-06T23:37:41.775682685Z" level=info msg="RemoveContainer for \"68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8\" returns successfully"
Jul 6 23:37:41.776015 kubelet[3163]: I0706 23:37:41.775964 3163 scope.go:117] "RemoveContainer" containerID="e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4"
Jul 6 23:37:41.776945 containerd[1917]: time="2025-07-06T23:37:41.776907190Z" level=info msg="RemoveContainer for \"e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4\""
Jul 6 23:37:41.782184 containerd[1917]: time="2025-07-06T23:37:41.782135117Z" level=info msg="RemoveContainer for \"e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4\" returns successfully"
Jul 6 23:37:41.782409 kubelet[3163]: I0706 23:37:41.782378 3163 scope.go:117] "RemoveContainer" containerID="750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714"
Jul 6 23:37:41.782680 containerd[1917]: time="2025-07-06T23:37:41.782590893Z" level=error msg="ContainerStatus for \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\": not found"
Jul 6 23:37:41.782751 kubelet[3163]: E0706 23:37:41.782714 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\": not found" containerID="750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714"
Jul 6 23:37:41.782792 kubelet[3163]: I0706 23:37:41.782753 3163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714"} err="failed to get container status \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\": rpc error: code = NotFound desc = an error occurred when try to find container \"750c3a2fe55bc669e3a6924ec0813f3a93def3b096589ea76e39047469303714\": not found"
Jul 6 23:37:41.782792 kubelet[3163]: I0706 23:37:41.782773 3163 scope.go:117] "RemoveContainer" containerID="8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1"
Jul 6 23:37:41.782946 containerd[1917]: time="2025-07-06T23:37:41.782913219Z" level=error msg="ContainerStatus for \"8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1\": not found"
Jul 6 23:37:41.783082 kubelet[3163]: E0706 23:37:41.783032 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1\": not found" containerID="8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1"
Jul 6 23:37:41.783082 kubelet[3163]: I0706 23:37:41.783058 3163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1"} err="failed to get container status \"8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"8564b9d316dd2e8efd4146e0c9bb4a8701f042e3f9d77314c525d428d2b6d8e1\": not found"
Jul 6 23:37:41.783082 kubelet[3163]: I0706 23:37:41.783072 3163 scope.go:117] "RemoveContainer" containerID="8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720"
Jul 6 23:37:41.783360 containerd[1917]: time="2025-07-06T23:37:41.783277438Z" level=error msg="ContainerStatus for \"8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720\": not found"
Jul 6 23:37:41.783421 kubelet[3163]: E0706 23:37:41.783381 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720\": not found" containerID="8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720"
Jul 6 23:37:41.783421 kubelet[3163]: I0706 23:37:41.783397 3163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720"} err="failed to get container status \"8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f2798bd8b656cd4cc4b5cbc47b3f488df032ff15de43956c0b57977c94f2720\": not found"
Jul 6 23:37:41.783421 kubelet[3163]: I0706 23:37:41.783409 3163 scope.go:117] "RemoveContainer" containerID="68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8"
Jul 6 23:37:41.783590 containerd[1917]: time="2025-07-06T23:37:41.783534377Z" level=error msg="ContainerStatus for \"68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8\": not found"
Jul 6 23:37:41.783695 kubelet[3163]: E0706 23:37:41.783642 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8\": not found" containerID="68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8"
Jul 6 23:37:41.783695 kubelet[3163]: I0706 23:37:41.783685 3163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8"} err="failed to get container status \"68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"68e73bda3c555c161fad260529f1fb74ecc5ac6ab20c194532461d4a0dd8b0d8\": not found"
Jul 6 23:37:41.783768 kubelet[3163]: I0706 23:37:41.783698 3163 scope.go:117] "RemoveContainer" containerID="e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4"
Jul 6 23:37:41.783923 containerd[1917]: time="2025-07-06T23:37:41.783897319Z" level=error msg="ContainerStatus for \"e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4\": not found"
Jul 6 23:37:41.784007 kubelet[3163]: E0706 23:37:41.783996 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4\": not found" containerID="e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4"
Jul 6 23:37:41.784041 kubelet[3163]: I0706 23:37:41.784013 3163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4"} err="failed to get container status \"e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0ffd2105629dd7f9946271b0bc9c0e3ece44a5fb99e998e99d7211c4fd991f4\": not found"
Jul 6 23:37:42.310143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2-rootfs.mount: Deactivated successfully.
Jul 6 23:37:42.310263 systemd[1]: var-lib-kubelet-pods-82545500\x2d2cf2\x2d4c39\x2d8e58\x2d72493ed6fd02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d26f56.mount: Deactivated successfully.
Jul 6 23:37:42.310373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55-rootfs.mount: Deactivated successfully.
Jul 6 23:37:42.310433 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55-shm.mount: Deactivated successfully.
Jul 6 23:37:42.310495 systemd[1]: var-lib-kubelet-pods-669550b1\x2de723\x2d4136\x2d91ac\x2dbec2a59c905f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsspml.mount: Deactivated successfully.
Jul 6 23:37:42.310555 systemd[1]: var-lib-kubelet-pods-669550b1\x2de723\x2d4136\x2d91ac\x2dbec2a59c905f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 6 23:37:42.310625 systemd[1]: var-lib-kubelet-pods-669550b1\x2de723\x2d4136\x2d91ac\x2dbec2a59c905f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 6 23:37:42.343890 kubelet[3163]: I0706 23:37:42.343034 3163 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="669550b1-e723-4136-91ac-bec2a59c905f" path="/var/lib/kubelet/pods/669550b1-e723-4136-91ac-bec2a59c905f/volumes"
Jul 6 23:37:42.343890 kubelet[3163]: I0706 23:37:42.343652 3163 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82545500-2cf2-4c39-8e58-72493ed6fd02" path="/var/lib/kubelet/pods/82545500-2cf2-4c39-8e58-72493ed6fd02/volumes"
Jul 6 23:37:42.420079 kubelet[3163]: E0706 23:37:42.420030 3163 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:37:43.256578 sshd[5046]: Connection closed by 139.178.68.195 port 44080
Jul 6 23:37:43.257216 sshd-session[5044]: pam_unix(sshd:session): session closed for user core
Jul 6 23:37:43.261937 systemd-logind[1891]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:37:43.262614 systemd[1]: sshd@23-172.31.20.250:22-139.178.68.195:44080.service: Deactivated successfully.
Jul 6 23:37:43.265543 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:37:43.266788 systemd-logind[1891]: Removed session 24.
Jul 6 23:37:43.298483 systemd[1]: Started sshd@24-172.31.20.250:22-139.178.68.195:44096.service - OpenSSH per-connection server daemon (139.178.68.195:44096).
Jul 6 23:37:43.472606 sshd[5206]: Accepted publickey for core from 139.178.68.195 port 44096 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I
Jul 6 23:37:43.474226 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:37:43.480846 systemd-logind[1891]: New session 25 of user core.
Jul 6 23:37:43.487560 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:37:43.576043 ntpd[1886]: Deleting interface #12 lxc_health, fe80::485d:daff:fe03:5ae7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=63 secs
Jul 6 23:37:43.576494 ntpd[1886]: 6 Jul 23:37:43 ntpd[1886]: Deleting interface #12 lxc_health, fe80::485d:daff:fe03:5ae7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=63 secs
Jul 6 23:37:44.146959 sshd[5208]: Connection closed by 139.178.68.195 port 44096
Jul 6 23:37:44.147693 sshd-session[5206]: pam_unix(sshd:session): session closed for user core
Jul 6 23:37:44.151476 systemd-logind[1891]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:37:44.151721 systemd[1]: sshd@24-172.31.20.250:22-139.178.68.195:44096.service: Deactivated successfully.
Jul 6 23:37:44.155227 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:37:44.160800 systemd-logind[1891]: Removed session 25.
Jul 6 23:37:44.188990 kubelet[3163]: E0706 23:37:44.188963 3163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="669550b1-e723-4136-91ac-bec2a59c905f" containerName="mount-cgroup"
Jul 6 23:37:44.188990 kubelet[3163]: E0706 23:37:44.188982 3163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="669550b1-e723-4136-91ac-bec2a59c905f" containerName="cilium-agent"
Jul 6 23:37:44.188990 kubelet[3163]: E0706 23:37:44.188989 3163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="669550b1-e723-4136-91ac-bec2a59c905f" containerName="apply-sysctl-overwrites"
Jul 6 23:37:44.189357 kubelet[3163]: E0706 23:37:44.188995 3163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="669550b1-e723-4136-91ac-bec2a59c905f" containerName="mount-bpf-fs"
Jul 6 23:37:44.189357 kubelet[3163]: E0706 23:37:44.189002 3163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82545500-2cf2-4c39-8e58-72493ed6fd02" containerName="cilium-operator"
Jul 6 23:37:44.189357 kubelet[3163]: E0706 23:37:44.189008 3163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="669550b1-e723-4136-91ac-bec2a59c905f" containerName="clean-cilium-state"
Jul 6 23:37:44.189357 kubelet[3163]: I0706 23:37:44.189033 3163 memory_manager.go:354] "RemoveStaleState removing state" podUID="669550b1-e723-4136-91ac-bec2a59c905f" containerName="cilium-agent"
Jul 6 23:37:44.189357 kubelet[3163]: I0706 23:37:44.189039 3163 memory_manager.go:354] "RemoveStaleState removing state" podUID="82545500-2cf2-4c39-8e58-72493ed6fd02" containerName="cilium-operator"
Jul 6 23:37:44.190529 systemd[1]: Started sshd@25-172.31.20.250:22-139.178.68.195:44106.service - OpenSSH per-connection server daemon (139.178.68.195:44106).
Jul 6 23:37:44.263046 kubelet[3163]: I0706 23:37:44.262449 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eca6229a-7928-4498-b7f6-2eafa127e075-clustermesh-secrets\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263046 kubelet[3163]: I0706 23:37:44.262488 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eca6229a-7928-4498-b7f6-2eafa127e075-xtables-lock\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263046 kubelet[3163]: I0706 23:37:44.262509 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eca6229a-7928-4498-b7f6-2eafa127e075-hostproc\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263046 kubelet[3163]: I0706 23:37:44.262524 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eca6229a-7928-4498-b7f6-2eafa127e075-cilium-cgroup\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263046 kubelet[3163]: I0706 23:37:44.262541 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eca6229a-7928-4498-b7f6-2eafa127e075-cni-path\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263046 kubelet[3163]: I0706 23:37:44.262559 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eca6229a-7928-4498-b7f6-2eafa127e075-cilium-config-path\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263355 kubelet[3163]: I0706 23:37:44.262573 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eca6229a-7928-4498-b7f6-2eafa127e075-cilium-ipsec-secrets\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263355 kubelet[3163]: I0706 23:37:44.262590 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkqmp\" (UniqueName: \"kubernetes.io/projected/eca6229a-7928-4498-b7f6-2eafa127e075-kube-api-access-mkqmp\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263355 kubelet[3163]: I0706 23:37:44.262607 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eca6229a-7928-4498-b7f6-2eafa127e075-host-proc-sys-kernel\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263355 kubelet[3163]: I0706 23:37:44.262622 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eca6229a-7928-4498-b7f6-2eafa127e075-etc-cni-netd\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263355 kubelet[3163]: I0706 23:37:44.262636 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eca6229a-7928-4498-b7f6-2eafa127e075-lib-modules\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263355 kubelet[3163]: I0706 23:37:44.262651 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eca6229a-7928-4498-b7f6-2eafa127e075-bpf-maps\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263516 kubelet[3163]: I0706 23:37:44.262664 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eca6229a-7928-4498-b7f6-2eafa127e075-hubble-tls\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263516 kubelet[3163]: I0706 23:37:44.262679 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eca6229a-7928-4498-b7f6-2eafa127e075-cilium-run\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.263516 kubelet[3163]: I0706 23:37:44.262697 3163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eca6229a-7928-4498-b7f6-2eafa127e075-host-proc-sys-net\") pod \"cilium-gphn5\" (UID: \"eca6229a-7928-4498-b7f6-2eafa127e075\") " pod="kube-system/cilium-gphn5"
Jul 6 23:37:44.269573 systemd[1]: Created slice kubepods-burstable-podeca6229a_7928_4498_b7f6_2eafa127e075.slice - libcontainer container kubepods-burstable-podeca6229a_7928_4498_b7f6_2eafa127e075.slice.
Jul 6 23:37:44.358478 sshd[5219]: Accepted publickey for core from 139.178.68.195 port 44106 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I
Jul 6 23:37:44.359934 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:37:44.364970 systemd-logind[1891]: New session 26 of user core.
Jul 6 23:37:44.371943 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 6 23:37:44.404316 kubelet[3163]: I0706 23:37:44.403925 3163 setters.go:600] "Node became not ready" node="ip-172-31-20-250" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:37:44Z","lastTransitionTime":"2025-07-06T23:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 6 23:37:44.526642 sshd[5226]: Connection closed by 139.178.68.195 port 44106
Jul 6 23:37:44.527225 sshd-session[5219]: pam_unix(sshd:session): session closed for user core
Jul 6 23:37:44.531493 systemd[1]: sshd@25-172.31.20.250:22-139.178.68.195:44106.service: Deactivated successfully.
Jul 6 23:37:44.533434 systemd[1]: session-26.scope: Deactivated successfully.
Jul 6 23:37:44.534445 systemd-logind[1891]: Session 26 logged out. Waiting for processes to exit.
Jul 6 23:37:44.535812 systemd-logind[1891]: Removed session 26.
Jul 6 23:37:44.562964 systemd[1]: Started sshd@26-172.31.20.250:22-139.178.68.195:44118.service - OpenSSH per-connection server daemon (139.178.68.195:44118).
Jul 6 23:37:44.573107 containerd[1917]: time="2025-07-06T23:37:44.573061998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gphn5,Uid:eca6229a-7928-4498-b7f6-2eafa127e075,Namespace:kube-system,Attempt:0,}"
Jul 6 23:37:44.607556 containerd[1917]: time="2025-07-06T23:37:44.607449624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:37:44.607556 containerd[1917]: time="2025-07-06T23:37:44.607496934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:37:44.607556 containerd[1917]: time="2025-07-06T23:37:44.607507194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:37:44.607852 containerd[1917]: time="2025-07-06T23:37:44.607572890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:37:44.624492 systemd[1]: Started cri-containerd-a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19.scope - libcontainer container a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19.
Jul 6 23:37:44.649914 containerd[1917]: time="2025-07-06T23:37:44.649784396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gphn5,Uid:eca6229a-7928-4498-b7f6-2eafa127e075,Namespace:kube-system,Attempt:0,} returns sandbox id \"a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19\""
Jul 6 23:37:44.652668 containerd[1917]: time="2025-07-06T23:37:44.652633486Z" level=info msg="CreateContainer within sandbox \"a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 6 23:37:44.673852 containerd[1917]: time="2025-07-06T23:37:44.673722128Z" level=info msg="CreateContainer within sandbox \"a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"85002dcb866ee0a2d5e26e1df4f18fb36dc1389a4439800f9fbbb75640403478\""
Jul 6 23:37:44.675750 containerd[1917]: time="2025-07-06T23:37:44.675711005Z" level=info msg="StartContainer for \"85002dcb866ee0a2d5e26e1df4f18fb36dc1389a4439800f9fbbb75640403478\""
Jul 6 23:37:44.708869 systemd[1]: Started cri-containerd-85002dcb866ee0a2d5e26e1df4f18fb36dc1389a4439800f9fbbb75640403478.scope - libcontainer container 85002dcb866ee0a2d5e26e1df4f18fb36dc1389a4439800f9fbbb75640403478.
Jul 6 23:37:44.725116 sshd[5233]: Accepted publickey for core from 139.178.68.195 port 44118 ssh2: RSA SHA256:WDCe1Z8jdy52mDEipUDwQgQqGFyq6k8s7RXm+D/II8I
Jul 6 23:37:44.728560 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:37:44.740159 systemd-logind[1891]: New session 27 of user core.
Jul 6 23:37:44.748534 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 6 23:37:44.754849 containerd[1917]: time="2025-07-06T23:37:44.754209899Z" level=info msg="StartContainer for \"85002dcb866ee0a2d5e26e1df4f18fb36dc1389a4439800f9fbbb75640403478\" returns successfully"
Jul 6 23:37:44.768095 systemd[1]: cri-containerd-85002dcb866ee0a2d5e26e1df4f18fb36dc1389a4439800f9fbbb75640403478.scope: Deactivated successfully.
Jul 6 23:37:44.768958 systemd[1]: cri-containerd-85002dcb866ee0a2d5e26e1df4f18fb36dc1389a4439800f9fbbb75640403478.scope: Consumed 23ms CPU time, 9.6M memory peak, 3.1M read from disk.
Jul 6 23:37:44.829758 containerd[1917]: time="2025-07-06T23:37:44.829442651Z" level=info msg="shim disconnected" id=85002dcb866ee0a2d5e26e1df4f18fb36dc1389a4439800f9fbbb75640403478 namespace=k8s.io
Jul 6 23:37:44.829758 containerd[1917]: time="2025-07-06T23:37:44.829490911Z" level=warning msg="cleaning up after shim disconnected" id=85002dcb866ee0a2d5e26e1df4f18fb36dc1389a4439800f9fbbb75640403478 namespace=k8s.io
Jul 6 23:37:44.829758 containerd[1917]: time="2025-07-06T23:37:44.829499669Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:37:45.703551 containerd[1917]: time="2025-07-06T23:37:45.703406068Z" level=info msg="CreateContainer within sandbox \"a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 6 23:37:45.726019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2885563295.mount: Deactivated successfully.
Jul 6 23:37:45.729354 containerd[1917]: time="2025-07-06T23:37:45.727186741Z" level=info msg="CreateContainer within sandbox \"a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"afc5a87bb5cd12569399812a808f06b87d6c949741a02d1a4a5109ac4587d801\""
Jul 6 23:37:45.730011 containerd[1917]: time="2025-07-06T23:37:45.729935351Z" level=info msg="StartContainer for \"afc5a87bb5cd12569399812a808f06b87d6c949741a02d1a4a5109ac4587d801\""
Jul 6 23:37:45.768606 systemd[1]: Started cri-containerd-afc5a87bb5cd12569399812a808f06b87d6c949741a02d1a4a5109ac4587d801.scope - libcontainer container afc5a87bb5cd12569399812a808f06b87d6c949741a02d1a4a5109ac4587d801.
Jul 6 23:37:45.810327 containerd[1917]: time="2025-07-06T23:37:45.809883775Z" level=info msg="StartContainer for \"afc5a87bb5cd12569399812a808f06b87d6c949741a02d1a4a5109ac4587d801\" returns successfully"
Jul 6 23:37:45.823074 systemd[1]: cri-containerd-afc5a87bb5cd12569399812a808f06b87d6c949741a02d1a4a5109ac4587d801.scope: Deactivated successfully.
Jul 6 23:37:45.823440 systemd[1]: cri-containerd-afc5a87bb5cd12569399812a808f06b87d6c949741a02d1a4a5109ac4587d801.scope: Consumed 20ms CPU time, 7.5M memory peak, 2.1M read from disk.
Jul 6 23:37:45.860731 containerd[1917]: time="2025-07-06T23:37:45.860507121Z" level=info msg="shim disconnected" id=afc5a87bb5cd12569399812a808f06b87d6c949741a02d1a4a5109ac4587d801 namespace=k8s.io
Jul 6 23:37:45.860731 containerd[1917]: time="2025-07-06T23:37:45.860557993Z" level=warning msg="cleaning up after shim disconnected" id=afc5a87bb5cd12569399812a808f06b87d6c949741a02d1a4a5109ac4587d801 namespace=k8s.io
Jul 6 23:37:45.860731 containerd[1917]: time="2025-07-06T23:37:45.860566170Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:37:46.374208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afc5a87bb5cd12569399812a808f06b87d6c949741a02d1a4a5109ac4587d801-rootfs.mount: Deactivated successfully.
Jul 6 23:37:46.707425 containerd[1917]: time="2025-07-06T23:37:46.707359611Z" level=info msg="CreateContainer within sandbox \"a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:37:46.735685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996627744.mount: Deactivated successfully.
Jul 6 23:37:46.739317 containerd[1917]: time="2025-07-06T23:37:46.737689629Z" level=info msg="CreateContainer within sandbox \"a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4238c2e1b00681f13029a5772eaef079c50005f0a0d322540f3719b4a32182fb\""
Jul 6 23:37:46.740603 containerd[1917]: time="2025-07-06T23:37:46.740563731Z" level=info msg="StartContainer for \"4238c2e1b00681f13029a5772eaef079c50005f0a0d322540f3719b4a32182fb\""
Jul 6 23:37:46.800188 systemd[1]: Started cri-containerd-4238c2e1b00681f13029a5772eaef079c50005f0a0d322540f3719b4a32182fb.scope - libcontainer container 4238c2e1b00681f13029a5772eaef079c50005f0a0d322540f3719b4a32182fb.
Jul 6 23:37:46.858448 containerd[1917]: time="2025-07-06T23:37:46.858395867Z" level=info msg="StartContainer for \"4238c2e1b00681f13029a5772eaef079c50005f0a0d322540f3719b4a32182fb\" returns successfully"
Jul 6 23:37:46.866540 systemd[1]: cri-containerd-4238c2e1b00681f13029a5772eaef079c50005f0a0d322540f3719b4a32182fb.scope: Deactivated successfully.
Jul 6 23:37:46.906257 containerd[1917]: time="2025-07-06T23:37:46.906189931Z" level=info msg="shim disconnected" id=4238c2e1b00681f13029a5772eaef079c50005f0a0d322540f3719b4a32182fb namespace=k8s.io
Jul 6 23:37:46.906257 containerd[1917]: time="2025-07-06T23:37:46.906248056Z" level=warning msg="cleaning up after shim disconnected" id=4238c2e1b00681f13029a5772eaef079c50005f0a0d322540f3719b4a32182fb namespace=k8s.io
Jul 6 23:37:46.906257 containerd[1917]: time="2025-07-06T23:37:46.906256960Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:37:47.374414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4238c2e1b00681f13029a5772eaef079c50005f0a0d322540f3719b4a32182fb-rootfs.mount: Deactivated successfully.
Jul 6 23:37:47.421240 kubelet[3163]: E0706 23:37:47.421128 3163 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:37:47.710402 containerd[1917]: time="2025-07-06T23:37:47.710367823Z" level=info msg="CreateContainer within sandbox \"a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:37:47.736058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount517345071.mount: Deactivated successfully.
Jul 6 23:37:47.740470 containerd[1917]: time="2025-07-06T23:37:47.740418568Z" level=info msg="CreateContainer within sandbox \"a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4f4cbc1bbe6a3360e8a1892b5b7b893d2325fae3c2b446f735f002086f1228ed\""
Jul 6 23:37:47.741017 containerd[1917]: time="2025-07-06T23:37:47.740987931Z" level=info msg="StartContainer for \"4f4cbc1bbe6a3360e8a1892b5b7b893d2325fae3c2b446f735f002086f1228ed\""
Jul 6 23:37:47.786538 systemd[1]: Started cri-containerd-4f4cbc1bbe6a3360e8a1892b5b7b893d2325fae3c2b446f735f002086f1228ed.scope - libcontainer container 4f4cbc1bbe6a3360e8a1892b5b7b893d2325fae3c2b446f735f002086f1228ed.
Jul 6 23:37:47.836497 systemd[1]: cri-containerd-4f4cbc1bbe6a3360e8a1892b5b7b893d2325fae3c2b446f735f002086f1228ed.scope: Deactivated successfully.
Jul 6 23:37:47.842275 containerd[1917]: time="2025-07-06T23:37:47.842102486Z" level=info msg="StartContainer for \"4f4cbc1bbe6a3360e8a1892b5b7b893d2325fae3c2b446f735f002086f1228ed\" returns successfully"
Jul 6 23:37:47.872157 containerd[1917]: time="2025-07-06T23:37:47.872072806Z" level=info msg="shim disconnected" id=4f4cbc1bbe6a3360e8a1892b5b7b893d2325fae3c2b446f735f002086f1228ed namespace=k8s.io
Jul 6 23:37:47.872157 containerd[1917]: time="2025-07-06T23:37:47.872136999Z" level=warning msg="cleaning up after shim disconnected" id=4f4cbc1bbe6a3360e8a1892b5b7b893d2325fae3c2b446f735f002086f1228ed namespace=k8s.io
Jul 6 23:37:47.872157 containerd[1917]: time="2025-07-06T23:37:47.872146455Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:37:48.374446 systemd[1]: run-containerd-runc-k8s.io-4f4cbc1bbe6a3360e8a1892b5b7b893d2325fae3c2b446f735f002086f1228ed-runc.Gu5jmW.mount: Deactivated successfully.
Jul 6 23:37:48.374559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f4cbc1bbe6a3360e8a1892b5b7b893d2325fae3c2b446f735f002086f1228ed-rootfs.mount: Deactivated successfully.
Jul 6 23:37:48.715608 containerd[1917]: time="2025-07-06T23:37:48.715568273Z" level=info msg="CreateContainer within sandbox \"a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:37:48.744675 containerd[1917]: time="2025-07-06T23:37:48.744536830Z" level=info msg="CreateContainer within sandbox \"a58f7684d8b983a8548db7bc436719fbe778a885ce6f3ec6c80e09e04ada4e19\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6cb52eaf88ae6b548cd9680f55c3c9ccbca38713a6111321bac785bf0f8ac129\""
Jul 6 23:37:48.745338 containerd[1917]: time="2025-07-06T23:37:48.745088654Z" level=info msg="StartContainer for \"6cb52eaf88ae6b548cd9680f55c3c9ccbca38713a6111321bac785bf0f8ac129\""
Jul 6 23:37:48.778556 systemd[1]: Started cri-containerd-6cb52eaf88ae6b548cd9680f55c3c9ccbca38713a6111321bac785bf0f8ac129.scope - libcontainer container 6cb52eaf88ae6b548cd9680f55c3c9ccbca38713a6111321bac785bf0f8ac129.
Jul 6 23:37:48.824469 containerd[1917]: time="2025-07-06T23:37:48.824417195Z" level=info msg="StartContainer for \"6cb52eaf88ae6b548cd9680f55c3c9ccbca38713a6111321bac785bf0f8ac129\" returns successfully"
Jul 6 23:37:49.506700 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 6 23:37:49.736606 kubelet[3163]: I0706 23:37:49.736551 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gphn5" podStartSLOduration=5.736527558 podStartE2EDuration="5.736527558s" podCreationTimestamp="2025-07-06 23:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:37:49.734181152 +0000 UTC m=+97.587804030" watchObservedRunningTime="2025-07-06 23:37:49.736527558 +0000 UTC m=+97.590150396"
Jul 6 23:37:52.652361 systemd-networkd[1827]: lxc_health: Link UP
Jul 6 23:37:52.666631 (udev-worker)[6079]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:37:52.669669 systemd-networkd[1827]: lxc_health: Gained carrier
Jul 6 23:37:53.658574 systemd[1]: run-containerd-runc-k8s.io-6cb52eaf88ae6b548cd9680f55c3c9ccbca38713a6111321bac785bf0f8ac129-runc.KS3cYG.mount: Deactivated successfully.
Jul 6 23:37:54.596500 systemd-networkd[1827]: lxc_health: Gained IPv6LL
Jul 6 23:37:55.935943 systemd[1]: run-containerd-runc-k8s.io-6cb52eaf88ae6b548cd9680f55c3c9ccbca38713a6111321bac785bf0f8ac129-runc.3qWQdL.mount: Deactivated successfully.
Jul 6 23:37:57.576151 ntpd[1886]: Listen normally on 15 lxc_health [fe80::7c26:98ff:fe0a:e456%14]:123
Jul 6 23:37:57.577394 ntpd[1886]: 6 Jul 23:37:57 ntpd[1886]: Listen normally on 15 lxc_health [fe80::7c26:98ff:fe0a:e456%14]:123
Jul 6 23:37:58.100361 systemd[1]: run-containerd-runc-k8s.io-6cb52eaf88ae6b548cd9680f55c3c9ccbca38713a6111321bac785bf0f8ac129-runc.vsmTxw.mount: Deactivated successfully.
Jul 6 23:37:58.218728 sshd[5308]: Connection closed by 139.178.68.195 port 44118
Jul 6 23:37:58.220777 sshd-session[5233]: pam_unix(sshd:session): session closed for user core
Jul 6 23:37:58.225303 systemd[1]: sshd@26-172.31.20.250:22-139.178.68.195:44118.service: Deactivated successfully.
Jul 6 23:37:58.227883 systemd[1]: session-27.scope: Deactivated successfully.
Jul 6 23:37:58.228894 systemd-logind[1891]: Session 27 logged out. Waiting for processes to exit.
Jul 6 23:37:58.230226 systemd-logind[1891]: Removed session 27.
Jul 6 23:38:12.331053 containerd[1917]: time="2025-07-06T23:38:12.331012533Z" level=info msg="StopPodSandbox for \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\""
Jul 6 23:38:12.331621 containerd[1917]: time="2025-07-06T23:38:12.331104338Z" level=info msg="TearDown network for sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" successfully"
Jul 6 23:38:12.331621 containerd[1917]: time="2025-07-06T23:38:12.331114537Z" level=info msg="StopPodSandbox for \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" returns successfully"
Jul 6 23:38:12.331621 containerd[1917]: time="2025-07-06T23:38:12.331475703Z" level=info msg="RemovePodSandbox for \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\""
Jul 6 23:38:12.331621 containerd[1917]: time="2025-07-06T23:38:12.331504276Z" level=info msg="Forcibly stopping sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\""
Jul 6 23:38:12.331621 containerd[1917]: time="2025-07-06T23:38:12.331551721Z" level=info msg="TearDown network for sandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" successfully"
Jul 6 23:38:12.336925 containerd[1917]: time="2025-07-06T23:38:12.336878895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:38:12.337058 containerd[1917]: time="2025-07-06T23:38:12.336945535Z" level=info msg="RemovePodSandbox \"0a21d4464bb8834566f9cbc5783e89f8e80e6a83ee0ff3c92c95c02aa2788a55\" returns successfully"
Jul 6 23:38:12.337683 containerd[1917]: time="2025-07-06T23:38:12.337484988Z" level=info msg="StopPodSandbox for \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\""
Jul 6 23:38:12.337683 containerd[1917]: time="2025-07-06T23:38:12.337567207Z" level=info msg="TearDown network for sandbox \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\" successfully"
Jul 6 23:38:12.337683 containerd[1917]: time="2025-07-06T23:38:12.337619252Z" level=info msg="StopPodSandbox for \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\" returns successfully"
Jul 6 23:38:12.337931 containerd[1917]: time="2025-07-06T23:38:12.337905811Z" level=info msg="RemovePodSandbox for \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\""
Jul 6 23:38:12.337969 containerd[1917]: time="2025-07-06T23:38:12.337933792Z" level=info msg="Forcibly stopping sandbox \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\""
Jul 6 23:38:12.338019 containerd[1917]: time="2025-07-06T23:38:12.337983740Z" level=info msg="TearDown network for sandbox \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\" successfully"
Jul 6 23:38:12.343102 containerd[1917]: time="2025-07-06T23:38:12.343041903Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:38:12.343102 containerd[1917]: time="2025-07-06T23:38:12.343096683Z" level=info msg="RemovePodSandbox \"171e66717aa88d542f442c89ead943010778debd5a8e72d0b83dde98a5043ca2\" returns successfully"
Jul 6 23:38:24.312838 kubelet[3163]: E0706 23:38:24.312757 3163 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-250?timeout=10s\": context deadline exceeded"
Jul 6 23:38:24.806225 systemd[1]: cri-containerd-0143c1c42e53d1a8736024e9e2df784d80760236f670e8c35789a3887e521eb2.scope: Deactivated successfully.
Jul 6 23:38:24.806846 systemd[1]: cri-containerd-0143c1c42e53d1a8736024e9e2df784d80760236f670e8c35789a3887e521eb2.scope: Consumed 3.646s CPU time, 80.8M memory peak, 35.3M read from disk.
Jul 6 23:38:24.829054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0143c1c42e53d1a8736024e9e2df784d80760236f670e8c35789a3887e521eb2-rootfs.mount: Deactivated successfully.
Jul 6 23:38:24.862090 containerd[1917]: time="2025-07-06T23:38:24.862024270Z" level=info msg="shim disconnected" id=0143c1c42e53d1a8736024e9e2df784d80760236f670e8c35789a3887e521eb2 namespace=k8s.io
Jul 6 23:38:24.862090 containerd[1917]: time="2025-07-06T23:38:24.862076169Z" level=warning msg="cleaning up after shim disconnected" id=0143c1c42e53d1a8736024e9e2df784d80760236f670e8c35789a3887e521eb2 namespace=k8s.io
Jul 6 23:38:24.862090 containerd[1917]: time="2025-07-06T23:38:24.862084512Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:38:25.806047 kubelet[3163]: I0706 23:38:25.805837 3163 scope.go:117] "RemoveContainer" containerID="0143c1c42e53d1a8736024e9e2df784d80760236f670e8c35789a3887e521eb2"
Jul 6 23:38:25.809918 containerd[1917]: time="2025-07-06T23:38:25.809868178Z" level=info msg="CreateContainer within sandbox \"5f0eba891a7bd22d71a9a70ad81d9bc786eceaef881a35ec51d859a60eee9696\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 6 23:38:25.828711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3517147433.mount: Deactivated successfully.
Jul 6 23:38:25.835022 containerd[1917]: time="2025-07-06T23:38:25.834972662Z" level=info msg="CreateContainer within sandbox \"5f0eba891a7bd22d71a9a70ad81d9bc786eceaef881a35ec51d859a60eee9696\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"36e261bd946d674b52c6c28b2c35e3f1853802f55a939971365c0bf54ea93e22\""
Jul 6 23:38:25.835557 containerd[1917]: time="2025-07-06T23:38:25.835483402Z" level=info msg="StartContainer for \"36e261bd946d674b52c6c28b2c35e3f1853802f55a939971365c0bf54ea93e22\""
Jul 6 23:38:25.871532 systemd[1]: Started cri-containerd-36e261bd946d674b52c6c28b2c35e3f1853802f55a939971365c0bf54ea93e22.scope - libcontainer container 36e261bd946d674b52c6c28b2c35e3f1853802f55a939971365c0bf54ea93e22.
Jul 6 23:38:25.926373 containerd[1917]: time="2025-07-06T23:38:25.926325973Z" level=info msg="StartContainer for \"36e261bd946d674b52c6c28b2c35e3f1853802f55a939971365c0bf54ea93e22\" returns successfully"
Jul 6 23:38:30.197505 systemd[1]: cri-containerd-65241c3467a7e20f193cfc5510c455a3e14b9639e4ddb8c9b302ad16fc9fddbb.scope: Deactivated successfully.
Jul 6 23:38:30.197785 systemd[1]: cri-containerd-65241c3467a7e20f193cfc5510c455a3e14b9639e4ddb8c9b302ad16fc9fddbb.scope: Consumed 1.303s CPU time, 28.3M memory peak, 10.8M read from disk.
Jul 6 23:38:30.224202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65241c3467a7e20f193cfc5510c455a3e14b9639e4ddb8c9b302ad16fc9fddbb-rootfs.mount: Deactivated successfully.
Jul 6 23:38:30.248627 containerd[1917]: time="2025-07-06T23:38:30.248545340Z" level=info msg="shim disconnected" id=65241c3467a7e20f193cfc5510c455a3e14b9639e4ddb8c9b302ad16fc9fddbb namespace=k8s.io
Jul 6 23:38:30.248627 containerd[1917]: time="2025-07-06T23:38:30.248622818Z" level=warning msg="cleaning up after shim disconnected" id=65241c3467a7e20f193cfc5510c455a3e14b9639e4ddb8c9b302ad16fc9fddbb namespace=k8s.io
Jul 6 23:38:30.248627 containerd[1917]: time="2025-07-06T23:38:30.248632835Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:38:30.265456 containerd[1917]: time="2025-07-06T23:38:30.265403132Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:38:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 6 23:38:30.817859 kubelet[3163]: I0706 23:38:30.817809 3163 scope.go:117] "RemoveContainer" containerID="65241c3467a7e20f193cfc5510c455a3e14b9639e4ddb8c9b302ad16fc9fddbb"
Jul 6 23:38:30.819688 containerd[1917]: time="2025-07-06T23:38:30.819654475Z" level=info msg="CreateContainer within sandbox \"9a8f169b140118fe4b196f3b3dbd03e93e2b8f204d25cfa5bd0bb88a2a5746ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 6 23:38:30.843415 containerd[1917]: time="2025-07-06T23:38:30.843365642Z" level=info msg="CreateContainer within sandbox \"9a8f169b140118fe4b196f3b3dbd03e93e2b8f204d25cfa5bd0bb88a2a5746ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"833b5d076ebadc6f37f8da95bd7304fe0516ffa6e6a247c955dd2cd887a024ca\""
Jul 6 23:38:30.843954 containerd[1917]: time="2025-07-06T23:38:30.843896073Z" level=info msg="StartContainer for \"833b5d076ebadc6f37f8da95bd7304fe0516ffa6e6a247c955dd2cd887a024ca\""
Jul 6 23:38:30.902534 systemd[1]: Started cri-containerd-833b5d076ebadc6f37f8da95bd7304fe0516ffa6e6a247c955dd2cd887a024ca.scope - libcontainer container 833b5d076ebadc6f37f8da95bd7304fe0516ffa6e6a247c955dd2cd887a024ca.
Jul 6 23:38:30.950428 containerd[1917]: time="2025-07-06T23:38:30.950383219Z" level=info msg="StartContainer for \"833b5d076ebadc6f37f8da95bd7304fe0516ffa6e6a247c955dd2cd887a024ca\" returns successfully"
Jul 6 23:38:34.316776 kubelet[3163]: E0706 23:38:34.316727 3163 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-250?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"