May 16 00:02:35.912772 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:19:35 -00 2025 May 16 00:02:35.912810 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 16 00:02:35.912830 kernel: BIOS-provided physical RAM map: May 16 00:02:35.912842 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 16 00:02:35.912853 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable May 16 00:02:35.912865 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 16 00:02:35.912879 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 16 00:02:35.912892 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 16 00:02:35.912905 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable May 16 00:02:35.912920 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 16 00:02:35.912932 kernel: NX (Execute Disable) protection: active May 16 00:02:35.912944 kernel: APIC: Static calls initialized May 16 00:02:35.912956 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable May 16 00:02:35.912970 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable May 16 00:02:35.912985 kernel: extended physical RAM map: May 16 00:02:35.913002 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 16 00:02:35.913016 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000768c0017] usable May 16 00:02:35.913030 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable May 16 00:02:35.913043 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable May 16 00:02:35.913056 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 16 00:02:35.913070 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 16 00:02:35.913084 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 16 00:02:35.913098 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable May 16 00:02:35.913111 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 16 00:02:35.913125 kernel: efi: EFI v2.7 by EDK II May 16 00:02:35.913139 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 May 16 00:02:35.913155 kernel: secureboot: Secure boot disabled May 16 00:02:35.913169 kernel: SMBIOS 2.7 present. 
May 16 00:02:35.913182 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 May 16 00:02:35.913196 kernel: Hypervisor detected: KVM May 16 00:02:35.913225 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 16 00:02:35.913240 kernel: kvm-clock: using sched offset of 3806499676 cycles May 16 00:02:35.913254 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 16 00:02:35.913268 kernel: tsc: Detected 2499.996 MHz processor May 16 00:02:35.913282 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 16 00:02:35.913296 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 16 00:02:35.913313 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 May 16 00:02:35.913327 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 16 00:02:35.913341 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 16 00:02:35.913356 kernel: Using GB pages for direct mapping May 16 00:02:35.913376 kernel: ACPI: Early table checksum verification disabled May 16 00:02:35.913391 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) May 16 00:02:35.913406 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) May 16 00:02:35.913423 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) May 16 00:02:35.913438 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) May 16 00:02:35.913453 kernel: ACPI: FACS 0x00000000789D0000 000040 May 16 00:02:35.913468 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) May 16 00:02:35.913483 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 16 00:02:35.913498 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 16 00:02:35.913513 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 
00000001 AMZN 00000001) May 16 00:02:35.913531 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) May 16 00:02:35.913546 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 16 00:02:35.913561 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 16 00:02:35.913576 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) May 16 00:02:35.913590 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] May 16 00:02:35.913605 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] May 16 00:02:35.913620 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] May 16 00:02:35.913635 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] May 16 00:02:35.913650 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] May 16 00:02:35.913668 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] May 16 00:02:35.913683 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] May 16 00:02:35.913698 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] May 16 00:02:35.913712 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] May 16 00:02:35.913727 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] May 16 00:02:35.913742 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] May 16 00:02:35.913757 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 16 00:02:35.913772 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 16 00:02:35.913787 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] May 16 00:02:35.913804 kernel: NUMA: Initialized distance table, cnt=1 May 16 00:02:35.913818 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] May 16 00:02:35.913833 kernel: Zone ranges: May 16 00:02:35.913848 kernel: DMA [mem 
0x0000000000001000-0x0000000000ffffff] May 16 00:02:35.913863 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] May 16 00:02:35.913878 kernel: Normal empty May 16 00:02:35.913893 kernel: Movable zone start for each node May 16 00:02:35.913908 kernel: Early memory node ranges May 16 00:02:35.913923 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 16 00:02:35.913937 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] May 16 00:02:35.913955 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] May 16 00:02:35.913970 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] May 16 00:02:35.913985 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 16 00:02:35.914006 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 16 00:02:35.914021 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 16 00:02:35.914037 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges May 16 00:02:35.914051 kernel: ACPI: PM-Timer IO Port: 0xb008 May 16 00:02:35.914066 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 16 00:02:35.914081 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 May 16 00:02:35.914099 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 16 00:02:35.914114 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 16 00:02:35.914128 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 16 00:02:35.914143 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 16 00:02:35.914158 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 16 00:02:35.914173 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 16 00:02:35.914188 kernel: TSC deadline timer available May 16 00:02:35.915800 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 16 00:02:35.915826 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 16 00:02:35.915847 kernel: [mem 
0x7ca00000-0xffffffff] available for PCI devices May 16 00:02:35.915861 kernel: Booting paravirtualized kernel on KVM May 16 00:02:35.915877 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 16 00:02:35.915891 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 16 00:02:35.915906 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 16 00:02:35.915920 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 16 00:02:35.915934 kernel: pcpu-alloc: [0] 0 1 May 16 00:02:35.915948 kernel: kvm-guest: PV spinlocks enabled May 16 00:02:35.915962 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 16 00:02:35.915982 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 16 00:02:35.915997 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 16 00:02:35.916012 kernel: random: crng init done May 16 00:02:35.916026 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 00:02:35.916040 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 16 00:02:35.916055 kernel: Fallback order for Node 0: 0 May 16 00:02:35.916069 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 May 16 00:02:35.916084 kernel: Policy zone: DMA32 May 16 00:02:35.916102 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 00:02:35.916117 kernel: Memory: 1874580K/2037804K available (12288K kernel code, 2295K rwdata, 22752K rodata, 42988K init, 2204K bss, 162968K reserved, 0K cma-reserved) May 16 00:02:35.916131 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 16 00:02:35.916145 kernel: Kernel/User page tables isolation: enabled May 16 00:02:35.916161 kernel: ftrace: allocating 37950 entries in 149 pages May 16 00:02:35.916187 kernel: ftrace: allocated 149 pages with 4 groups May 16 00:02:35.916230 kernel: Dynamic Preempt: voluntary May 16 00:02:35.916245 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 00:02:35.916274 kernel: rcu: RCU event tracing is enabled. May 16 00:02:35.916306 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 16 00:02:35.916332 kernel: Trampoline variant of Tasks RCU enabled. May 16 00:02:35.916350 kernel: Rude variant of Tasks RCU enabled. May 16 00:02:35.916366 kernel: Tracing variant of Tasks RCU enabled. May 16 00:02:35.916382 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 16 00:02:35.916397 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 16 00:02:35.916411 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 16 00:02:35.916426 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 16 00:02:35.916444 kernel: Console: colour dummy device 80x25 May 16 00:02:35.916460 kernel: printk: console [tty0] enabled May 16 00:02:35.916474 kernel: printk: console [ttyS0] enabled May 16 00:02:35.916488 kernel: ACPI: Core revision 20230628 May 16 00:02:35.916504 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns May 16 00:02:35.916517 kernel: APIC: Switch to symmetric I/O mode setup May 16 00:02:35.916536 kernel: x2apic enabled May 16 00:02:35.916555 kernel: APIC: Switched APIC routing to: physical x2apic May 16 00:02:35.916568 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns May 16 00:02:35.916586 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) May 16 00:02:35.916599 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 16 00:02:35.916614 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 16 00:02:35.916630 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 16 00:02:35.916643 kernel: Spectre V2 : Mitigation: Retpolines May 16 00:02:35.916655 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 16 00:02:35.916669 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
May 16 00:02:35.916684 kernel: RETBleed: Vulnerable May 16 00:02:35.916699 kernel: Speculative Store Bypass: Vulnerable May 16 00:02:35.916719 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode May 16 00:02:35.916732 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 16 00:02:35.916745 kernel: GDS: Unknown: Dependent on hypervisor status May 16 00:02:35.916761 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 16 00:02:35.916778 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 16 00:02:35.916794 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 16 00:02:35.916810 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 16 00:02:35.916827 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 16 00:02:35.916843 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 16 00:02:35.916859 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 16 00:02:35.916875 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 16 00:02:35.916894 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 16 00:02:35.916911 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 16 00:02:35.916926 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 16 00:02:35.916942 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 16 00:02:35.916958 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 May 16 00:02:35.916974 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 May 16 00:02:35.916990 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 May 16 00:02:35.917007 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 May 16 00:02:35.917023 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
May 16 00:02:35.917039 kernel: Freeing SMP alternatives memory: 32K May 16 00:02:35.917055 kernel: pid_max: default: 32768 minimum: 301 May 16 00:02:35.917071 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 16 00:02:35.917091 kernel: landlock: Up and running. May 16 00:02:35.917107 kernel: SELinux: Initializing. May 16 00:02:35.917123 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 16 00:02:35.917139 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 16 00:02:35.917154 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) May 16 00:02:35.917169 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 16 00:02:35.917185 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 16 00:02:35.917228 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 16 00:02:35.917246 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 16 00:02:35.917262 kernel: signal: max sigframe size: 3632 May 16 00:02:35.917283 kernel: rcu: Hierarchical SRCU implementation. May 16 00:02:35.917299 kernel: rcu: Max phase no-delay instances is 400. May 16 00:02:35.917315 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 16 00:02:35.917332 kernel: smp: Bringing up secondary CPUs ... May 16 00:02:35.917348 kernel: smpboot: x86: Booting SMP configuration: May 16 00:02:35.917363 kernel: .... node #0, CPUs: #1 May 16 00:02:35.917378 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. May 16 00:02:35.917393 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. May 16 00:02:35.917410 kernel: smp: Brought up 1 node, 2 CPUs May 16 00:02:35.917424 kernel: smpboot: Max logical packages: 1 May 16 00:02:35.917439 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) May 16 00:02:35.917453 kernel: devtmpfs: initialized May 16 00:02:35.917468 kernel: x86/mm: Memory block size: 128MB May 16 00:02:35.917482 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) May 16 00:02:35.917497 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 00:02:35.917512 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 16 00:02:35.917527 kernel: pinctrl core: initialized pinctrl subsystem May 16 00:02:35.917544 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 00:02:35.917560 kernel: audit: initializing netlink subsys (disabled) May 16 00:02:35.917575 kernel: audit: type=2000 audit(1747353755.987:1): state=initialized audit_enabled=0 res=1 May 16 00:02:35.917590 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 00:02:35.917604 kernel: thermal_sys: Registered thermal governor 'user_space' May 16 00:02:35.917619 kernel: cpuidle: using governor menu May 16 00:02:35.917633 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 00:02:35.917647 kernel: dca service started, version 1.12.1 May 16 00:02:35.917661 kernel: PCI: Using configuration type 1 for base access May 16 00:02:35.917681 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 16 00:02:35.917697 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 16 00:02:35.917712 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 16 00:02:35.917728 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 16 00:02:35.917744 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 16 00:02:35.917758 kernel: ACPI: Added _OSI(Module Device) May 16 00:02:35.917775 kernel: ACPI: Added _OSI(Processor Device) May 16 00:02:35.917791 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 00:02:35.917805 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 00:02:35.917825 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 16 00:02:35.917840 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 16 00:02:35.917855 kernel: ACPI: Interpreter enabled May 16 00:02:35.917870 kernel: ACPI: PM: (supports S0 S5) May 16 00:02:35.917885 kernel: ACPI: Using IOAPIC for interrupt routing May 16 00:02:35.917900 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 16 00:02:35.917916 kernel: PCI: Using E820 reservations for host bridge windows May 16 00:02:35.917931 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 16 00:02:35.917945 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 16 00:02:35.918186 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 16 00:02:35.918368 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 16 00:02:35.918509 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 16 00:02:35.918527 kernel: acpiphp: Slot [3] registered May 16 00:02:35.918541 kernel: acpiphp: Slot [4] registered May 16 00:02:35.918556 kernel: acpiphp: Slot [5] registered May 16 00:02:35.918570 kernel: acpiphp: Slot [6] registered May 16 00:02:35.918584 
kernel: acpiphp: Slot [7] registered May 16 00:02:35.918603 kernel: acpiphp: Slot [8] registered May 16 00:02:35.918618 kernel: acpiphp: Slot [9] registered May 16 00:02:35.918632 kernel: acpiphp: Slot [10] registered May 16 00:02:35.918646 kernel: acpiphp: Slot [11] registered May 16 00:02:35.918660 kernel: acpiphp: Slot [12] registered May 16 00:02:35.918674 kernel: acpiphp: Slot [13] registered May 16 00:02:35.918688 kernel: acpiphp: Slot [14] registered May 16 00:02:35.918703 kernel: acpiphp: Slot [15] registered May 16 00:02:35.918717 kernel: acpiphp: Slot [16] registered May 16 00:02:35.918735 kernel: acpiphp: Slot [17] registered May 16 00:02:35.918749 kernel: acpiphp: Slot [18] registered May 16 00:02:35.918764 kernel: acpiphp: Slot [19] registered May 16 00:02:35.918778 kernel: acpiphp: Slot [20] registered May 16 00:02:35.918793 kernel: acpiphp: Slot [21] registered May 16 00:02:35.918807 kernel: acpiphp: Slot [22] registered May 16 00:02:35.918821 kernel: acpiphp: Slot [23] registered May 16 00:02:35.918835 kernel: acpiphp: Slot [24] registered May 16 00:02:35.918849 kernel: acpiphp: Slot [25] registered May 16 00:02:35.919260 kernel: acpiphp: Slot [26] registered May 16 00:02:35.919291 kernel: acpiphp: Slot [27] registered May 16 00:02:35.919309 kernel: acpiphp: Slot [28] registered May 16 00:02:35.919325 kernel: acpiphp: Slot [29] registered May 16 00:02:35.919342 kernel: acpiphp: Slot [30] registered May 16 00:02:35.919360 kernel: acpiphp: Slot [31] registered May 16 00:02:35.919377 kernel: PCI host bridge to bus 0000:00 May 16 00:02:35.919560 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 16 00:02:35.919699 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 16 00:02:35.919838 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 16 00:02:35.919971 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 16 00:02:35.920101 kernel: pci_bus 0000:00: root 
bus resource [mem 0x100000000-0x2000ffffffff window] May 16 00:02:35.922300 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 00:02:35.922487 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 16 00:02:35.922670 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 16 00:02:35.922850 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 May 16 00:02:35.922997 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 16 00:02:35.923135 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff May 16 00:02:35.923301 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff May 16 00:02:35.923440 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff May 16 00:02:35.923575 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff May 16 00:02:35.923712 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff May 16 00:02:35.923852 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff May 16 00:02:35.923999 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 May 16 00:02:35.924137 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] May 16 00:02:35.924297 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 16 00:02:35.924435 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb May 16 00:02:35.924571 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 16 00:02:35.924716 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 May 16 00:02:35.924860 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] May 16 00:02:35.925003 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 May 16 00:02:35.925142 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] May 16 00:02:35.925163 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 16 00:02:35.925180 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 16 00:02:35.925198 kernel: ACPI: PCI: Interrupt 
link LNKC configured for IRQ 11 May 16 00:02:35.928577 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 16 00:02:35.928594 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 16 00:02:35.928616 kernel: iommu: Default domain type: Translated May 16 00:02:35.928631 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 16 00:02:35.928646 kernel: efivars: Registered efivars operations May 16 00:02:35.928662 kernel: PCI: Using ACPI for IRQ routing May 16 00:02:35.928677 kernel: PCI: pci_cache_line_size set to 64 bytes May 16 00:02:35.928692 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] May 16 00:02:35.928706 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] May 16 00:02:35.928720 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] May 16 00:02:35.928922 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device May 16 00:02:35.929066 kernel: pci 0000:00:03.0: vgaarb: bridge control possible May 16 00:02:35.929200 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 16 00:02:35.929247 kernel: vgaarb: loaded May 16 00:02:35.929266 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 May 16 00:02:35.929282 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter May 16 00:02:35.929299 kernel: clocksource: Switched to clocksource kvm-clock May 16 00:02:35.929315 kernel: VFS: Disk quotas dquot_6.6.0 May 16 00:02:35.929332 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 00:02:35.929348 kernel: pnp: PnP ACPI init May 16 00:02:35.929369 kernel: pnp: PnP ACPI: found 5 devices May 16 00:02:35.929386 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 16 00:02:35.929402 kernel: NET: Registered PF_INET protocol family May 16 00:02:35.929416 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 16 00:02:35.929433 kernel: tcp_listen_portaddr_hash hash 
table entries: 1024 (order: 2, 16384 bytes, linear) May 16 00:02:35.929449 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 00:02:35.929465 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 16 00:02:35.929481 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 16 00:02:35.929497 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 16 00:02:35.929517 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 16 00:02:35.929533 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 16 00:02:35.929550 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 00:02:35.929566 kernel: NET: Registered PF_XDP protocol family May 16 00:02:35.929706 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 16 00:02:35.929829 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 16 00:02:35.929951 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 16 00:02:35.930079 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 16 00:02:35.930221 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] May 16 00:02:35.930361 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 16 00:02:35.930381 kernel: PCI: CLS 0 bytes, default 64 May 16 00:02:35.930397 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 16 00:02:35.930413 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns May 16 00:02:35.930428 kernel: clocksource: Switched to clocksource tsc May 16 00:02:35.930444 kernel: Initialise system trusted keyrings May 16 00:02:35.930459 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 16 00:02:35.930479 kernel: Key type asymmetric registered May 16 00:02:35.930493 kernel: Asymmetric key parser 'x509' registered May 16 
00:02:35.930509 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 16 00:02:35.930523 kernel: io scheduler mq-deadline registered May 16 00:02:35.930536 kernel: io scheduler kyber registered May 16 00:02:35.930551 kernel: io scheduler bfq registered May 16 00:02:35.930567 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 16 00:02:35.930585 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 00:02:35.930602 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 16 00:02:35.930619 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 16 00:02:35.930640 kernel: i8042: Warning: Keylock active May 16 00:02:35.930657 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 16 00:02:35.930674 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 16 00:02:35.930851 kernel: rtc_cmos 00:00: RTC can wake from S4 May 16 00:02:35.930979 kernel: rtc_cmos 00:00: registered as rtc0 May 16 00:02:35.931099 kernel: rtc_cmos 00:00: setting system clock to 2025-05-16T00:02:35 UTC (1747353755) May 16 00:02:35.933347 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 16 00:02:35.933402 kernel: intel_pstate: CPU model not supported May 16 00:02:35.933420 kernel: efifb: probing for efifb May 16 00:02:35.933437 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k May 16 00:02:35.933454 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 16 00:02:35.933494 kernel: efifb: scrolling: redraw May 16 00:02:35.933514 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 16 00:02:35.933532 kernel: Console: switching to colour frame buffer device 100x37 May 16 00:02:35.933550 kernel: fb0: EFI VGA frame buffer device May 16 00:02:35.933567 kernel: pstore: Using crash dump compression: deflate May 16 00:02:35.933587 kernel: pstore: Registered efi_pstore as persistent store backend May 16 00:02:35.933604 kernel: NET: Registered PF_INET6 
protocol family May 16 00:02:35.933622 kernel: Segment Routing with IPv6 May 16 00:02:35.933639 kernel: In-situ OAM (IOAM) with IPv6 May 16 00:02:35.933657 kernel: NET: Registered PF_PACKET protocol family May 16 00:02:35.933674 kernel: Key type dns_resolver registered May 16 00:02:35.933691 kernel: IPI shorthand broadcast: enabled May 16 00:02:35.933708 kernel: sched_clock: Marking stable (477002157, 137245056)->(680600527, -66353314) May 16 00:02:35.933725 kernel: registered taskstats version 1 May 16 00:02:35.933746 kernel: Loading compiled-in X.509 certificates May 16 00:02:35.933766 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 563478d245b598189519397611f5bddee97f3fc1' May 16 00:02:35.933784 kernel: Key type .fscrypt registered May 16 00:02:35.933801 kernel: Key type fscrypt-provisioning registered May 16 00:02:35.933818 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 00:02:35.933835 kernel: ima: Allocated hash algorithm: sha1 May 16 00:02:35.933852 kernel: ima: No architecture policies found May 16 00:02:35.933870 kernel: clk: Disabling unused clocks May 16 00:02:35.933887 kernel: Freeing unused kernel image (initmem) memory: 42988K May 16 00:02:35.933908 kernel: Write protecting the kernel read-only data: 36864k May 16 00:02:35.933925 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K May 16 00:02:35.933943 kernel: Run /init as init process May 16 00:02:35.933961 kernel: with arguments: May 16 00:02:35.933978 kernel: /init May 16 00:02:35.933994 kernel: with environment: May 16 00:02:35.934022 kernel: HOME=/ May 16 00:02:35.934041 kernel: TERM=linux May 16 00:02:35.934058 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 00:02:35.934082 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ 
+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 16 00:02:35.934104 systemd[1]: Detected virtualization amazon. May 16 00:02:35.934122 systemd[1]: Detected architecture x86-64. May 16 00:02:35.934139 systemd[1]: Running in initrd. May 16 00:02:35.934157 systemd[1]: No hostname configured, using default hostname. May 16 00:02:35.934178 systemd[1]: Hostname set to . May 16 00:02:35.934197 systemd[1]: Initializing machine ID from VM UUID. May 16 00:02:35.934225 systemd[1]: Queued start job for default target initrd.target. May 16 00:02:35.934244 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:02:35.934263 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:02:35.934282 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 00:02:35.934300 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:02:35.934322 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 00:02:35.934341 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 00:02:35.934361 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 00:02:35.934380 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 00:02:35.934398 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:02:35.934415 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:02:35.934434 systemd[1]: Reached target paths.target - Path Units. May 16 00:02:35.934456 systemd[1]: Reached target slices.target - Slice Units. 
May 16 00:02:35.934473 systemd[1]: Reached target swap.target - Swaps. May 16 00:02:35.934492 systemd[1]: Reached target timers.target - Timer Units. May 16 00:02:35.934510 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:02:35.934528 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:02:35.934545 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 00:02:35.934564 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 16 00:02:35.934583 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:02:35.934606 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:02:35.934624 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:02:35.934642 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:02:35.934661 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 00:02:35.934679 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:02:35.934697 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 00:02:35.934714 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:02:35.934732 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:02:35.934750 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:02:35.934772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:02:35.934790 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 00:02:35.934808 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:02:35.934855 systemd-journald[179]: Collecting audit messages is disabled. May 16 00:02:35.934901 systemd[1]: Finished systemd-fsck-usr.service. 
May 16 00:02:35.934921 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 00:02:35.934941 systemd-journald[179]: Journal started May 16 00:02:35.934982 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2b6a6064aceb75a7b87993655d68bc) is 4.7M, max 38.2M, 33.4M free. May 16 00:02:35.916588 systemd-modules-load[180]: Inserted module 'overlay' May 16 00:02:35.948226 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:02:35.949066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:02:35.951459 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:02:35.967231 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 00:02:35.969579 kernel: Bridge firewalling registered May 16 00:02:35.968539 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:02:35.968819 systemd-modules-load[180]: Inserted module 'br_netfilter' May 16 00:02:35.972390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:02:35.980436 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:02:35.982233 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:02:35.996899 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:02:35.998723 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:02:36.006589 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 16 00:02:36.008286 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 16 00:02:36.015005 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 00:02:36.016939 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:02:36.018930 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:02:36.026424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:02:36.039833 dracut-cmdline[210]: dracut-dracut-053 May 16 00:02:36.043937 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 16 00:02:36.080857 systemd-resolved[215]: Positive Trust Anchors: May 16 00:02:36.080875 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:02:36.080934 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:02:36.091560 systemd-resolved[215]: Defaulting to hostname 'linux'. May 16 00:02:36.093806 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 16 00:02:36.095166 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:02:36.136245 kernel: SCSI subsystem initialized May 16 00:02:36.146238 kernel: Loading iSCSI transport class v2.0-870. May 16 00:02:36.158238 kernel: iscsi: registered transport (tcp) May 16 00:02:36.179467 kernel: iscsi: registered transport (qla4xxx) May 16 00:02:36.179550 kernel: QLogic iSCSI HBA Driver May 16 00:02:36.219952 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 00:02:36.228450 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 00:02:36.253471 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 00:02:36.253551 kernel: device-mapper: uevent: version 1.0.3 May 16 00:02:36.256231 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 16 00:02:36.297234 kernel: raid6: avx512x4 gen() 17692 MB/s May 16 00:02:36.315234 kernel: raid6: avx512x2 gen() 17589 MB/s May 16 00:02:36.333231 kernel: raid6: avx512x1 gen() 17563 MB/s May 16 00:02:36.350232 kernel: raid6: avx2x4 gen() 17510 MB/s May 16 00:02:36.369229 kernel: raid6: avx2x2 gen() 17535 MB/s May 16 00:02:36.387403 kernel: raid6: avx2x1 gen() 13265 MB/s May 16 00:02:36.387473 kernel: raid6: using algorithm avx512x4 gen() 17692 MB/s May 16 00:02:36.406337 kernel: raid6: .... xor() 7445 MB/s, rmw enabled May 16 00:02:36.406402 kernel: raid6: using avx512x2 recovery algorithm May 16 00:02:36.428247 kernel: xor: automatically using best checksumming function avx May 16 00:02:36.593237 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 00:02:36.604117 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 00:02:36.612419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:02:36.625499 systemd-udevd[398]: Using default interface naming scheme 'v255'. 
May 16 00:02:36.630594 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:02:36.638073 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 00:02:36.660565 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation May 16 00:02:36.691672 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:02:36.697390 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:02:36.749331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:02:36.759431 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 00:02:36.781527 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 00:02:36.785614 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:02:36.787430 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:02:36.787939 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:02:36.795501 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 00:02:36.816546 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 00:02:36.849257 kernel: cryptd: max_cpu_qlen set to 1000 May 16 00:02:36.858710 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 16 00:02:36.859007 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 16 00:02:36.863454 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. May 16 00:02:36.873394 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:3b:ec:04:56:29 May 16 00:02:36.890448 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:02:36.891431 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 16 00:02:36.892582 (udev-worker)[457]: Network interface NamePolicy= disabled on kernel command line. May 16 00:02:36.893348 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:02:36.893863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:02:36.894072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:02:36.894927 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:02:36.905249 kernel: AVX2 version of gcm_enc/dec engaged. May 16 00:02:36.909402 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:02:36.913259 kernel: AES CTR mode by8 optimization enabled May 16 00:02:36.919856 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:02:36.920800 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:02:36.924515 kernel: nvme nvme0: pci function 0000:00:04.0 May 16 00:02:36.924749 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 16 00:02:36.937523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:02:36.939281 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 16 00:02:36.949266 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 00:02:36.949349 kernel: GPT:9289727 != 16777215 May 16 00:02:36.949373 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:02:36.949394 kernel: GPT:9289727 != 16777215 May 16 00:02:36.949414 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:02:36.949435 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 16 00:02:36.966065 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:02:36.977431 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 16 00:02:36.997220 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:02:37.062232 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by (udev-worker) (443) May 16 00:02:37.073563 kernel: BTRFS: device fsid da1480a3-a7d8-4e12-bbe1-1257540eb9ae devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (458) May 16 00:02:37.126528 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 16 00:02:37.137674 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 16 00:02:37.149240 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 16 00:02:37.155363 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 16 00:02:37.155926 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 16 00:02:37.163486 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 00:02:37.170542 disk-uuid[632]: Primary Header is updated. May 16 00:02:37.170542 disk-uuid[632]: Secondary Entries is updated. May 16 00:02:37.170542 disk-uuid[632]: Secondary Header is updated. May 16 00:02:37.177231 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 16 00:02:38.192231 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 16 00:02:38.192841 disk-uuid[633]: The operation has completed successfully. May 16 00:02:38.328080 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:02:38.328221 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 00:02:38.345546 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
May 16 00:02:38.352378 sh[893]: Success May 16 00:02:38.374231 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 16 00:02:38.470917 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 00:02:38.490414 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 00:02:38.494947 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 16 00:02:38.515517 kernel: BTRFS info (device dm-0): first mount of filesystem da1480a3-a7d8-4e12-bbe1-1257540eb9ae May 16 00:02:38.515591 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 00:02:38.517323 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 16 00:02:38.520011 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 16 00:02:38.520062 kernel: BTRFS info (device dm-0): using free space tree May 16 00:02:38.632242 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 16 00:02:38.635601 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 00:02:38.636691 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 00:02:38.641380 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 00:02:38.644359 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 00:02:38.667416 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:02:38.667478 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 16 00:02:38.667491 kernel: BTRFS info (device nvme0n1p6): using free space tree May 16 00:02:38.674230 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 16 00:02:38.682689 systemd[1]: mnt-oem.mount: Deactivated successfully. 
May 16 00:02:38.685281 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:02:38.691675 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 00:02:38.700558 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 16 00:02:38.735748 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:02:38.742470 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:02:38.765224 systemd-networkd[1085]: lo: Link UP May 16 00:02:38.765234 systemd-networkd[1085]: lo: Gained carrier May 16 00:02:38.767053 systemd-networkd[1085]: Enumeration completed May 16 00:02:38.767585 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:02:38.767589 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:02:38.768681 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:02:38.770884 systemd[1]: Reached target network.target - Network. May 16 00:02:38.771031 systemd-networkd[1085]: eth0: Link UP May 16 00:02:38.771036 systemd-networkd[1085]: eth0: Gained carrier May 16 00:02:38.771049 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 16 00:02:38.781307 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.20.206/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 16 00:02:39.012221 ignition[1030]: Ignition 2.20.0 May 16 00:02:39.012236 ignition[1030]: Stage: fetch-offline May 16 00:02:39.012485 ignition[1030]: no configs at "/usr/lib/ignition/base.d" May 16 00:02:39.012497 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 16 00:02:39.014605 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:02:39.012834 ignition[1030]: Ignition finished successfully May 16 00:02:39.024513 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 16 00:02:39.041434 ignition[1093]: Ignition 2.20.0 May 16 00:02:39.041448 ignition[1093]: Stage: fetch May 16 00:02:39.041885 ignition[1093]: no configs at "/usr/lib/ignition/base.d" May 16 00:02:39.041899 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 16 00:02:39.042134 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 16 00:02:39.080351 ignition[1093]: PUT result: OK May 16 00:02:39.085188 ignition[1093]: parsed url from cmdline: "" May 16 00:02:39.085198 ignition[1093]: no config URL provided May 16 00:02:39.085224 ignition[1093]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:02:39.085261 ignition[1093]: no config at "/usr/lib/ignition/user.ign" May 16 00:02:39.085284 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 16 00:02:39.087041 ignition[1093]: PUT result: OK May 16 00:02:39.087102 ignition[1093]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 16 00:02:39.106586 ignition[1093]: GET result: OK May 16 00:02:39.106746 ignition[1093]: parsing config with SHA512: 1dac2faad587cb0749cd0c77997fa05726f0e67c89a71097cf5dce60c79330bb4eea9058fa5785cb1e6a45479cc89ea67ff0ab494243167b62272a8a9053fd08 May 16 00:02:39.111985 unknown[1093]: fetched base config from "system" May 16 
00:02:39.111999 unknown[1093]: fetched base config from "system" May 16 00:02:39.112622 ignition[1093]: fetch: fetch complete May 16 00:02:39.112006 unknown[1093]: fetched user config from "aws" May 16 00:02:39.112630 ignition[1093]: fetch: fetch passed May 16 00:02:39.112697 ignition[1093]: Ignition finished successfully May 16 00:02:39.115581 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 16 00:02:39.119415 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 00:02:39.146891 ignition[1100]: Ignition 2.20.0 May 16 00:02:39.146905 ignition[1100]: Stage: kargs May 16 00:02:39.147370 ignition[1100]: no configs at "/usr/lib/ignition/base.d" May 16 00:02:39.147385 ignition[1100]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 16 00:02:39.147519 ignition[1100]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 16 00:02:39.148593 ignition[1100]: PUT result: OK May 16 00:02:39.152480 ignition[1100]: kargs: kargs passed May 16 00:02:39.152541 ignition[1100]: Ignition finished successfully May 16 00:02:39.153560 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 00:02:39.158481 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 00:02:39.173273 ignition[1106]: Ignition 2.20.0 May 16 00:02:39.173287 ignition[1106]: Stage: disks May 16 00:02:39.173720 ignition[1106]: no configs at "/usr/lib/ignition/base.d" May 16 00:02:39.173733 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 16 00:02:39.173855 ignition[1106]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 16 00:02:39.175337 ignition[1106]: PUT result: OK May 16 00:02:39.179221 ignition[1106]: disks: disks passed May 16 00:02:39.179730 ignition[1106]: Ignition finished successfully May 16 00:02:39.181382 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 00:02:39.182226 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
May 16 00:02:39.182613 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 00:02:39.183189 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:02:39.183782 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:02:39.184384 systemd[1]: Reached target basic.target - Basic System. May 16 00:02:39.196526 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 00:02:39.227967 systemd-fsck[1114]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 16 00:02:39.231628 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 00:02:39.237434 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 00:02:39.344247 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 13a141f5-2ff0-46d9-bee3-974c86536128 r/w with ordered data mode. Quota mode: none. May 16 00:02:39.344346 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 00:02:39.345285 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 00:02:39.360358 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:02:39.363239 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 00:02:39.365154 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 00:02:39.365765 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:02:39.365809 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:02:39.382235 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1133) May 16 00:02:39.382929 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
May 16 00:02:39.388874 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:02:39.388913 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 16 00:02:39.388926 kernel: BTRFS info (device nvme0n1p6): using free space tree May 16 00:02:39.393444 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 16 00:02:39.395396 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 00:02:39.397773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 00:02:39.656169 initrd-setup-root[1157]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:02:39.661298 initrd-setup-root[1164]: cut: /sysroot/etc/group: No such file or directory May 16 00:02:39.665831 initrd-setup-root[1171]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:02:39.669743 initrd-setup-root[1178]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:02:39.907389 systemd-networkd[1085]: eth0: Gained IPv6LL May 16 00:02:39.941935 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 00:02:39.945390 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 00:02:39.948404 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 00:02:39.960754 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 16 00:02:39.962678 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:02:39.990942 ignition[1245]: INFO : Ignition 2.20.0 May 16 00:02:39.990942 ignition[1245]: INFO : Stage: mount May 16 00:02:39.990942 ignition[1245]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:02:39.990942 ignition[1245]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 16 00:02:39.990942 ignition[1245]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 16 00:02:39.994271 ignition[1245]: INFO : PUT result: OK May 16 00:02:39.999431 ignition[1245]: INFO : mount: mount passed May 16 00:02:39.999838 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 00:02:40.000707 ignition[1245]: INFO : Ignition finished successfully May 16 00:02:40.001688 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 00:02:40.007344 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 00:02:40.026455 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:02:40.044224 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1258) May 16 00:02:40.044278 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:02:40.048012 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 16 00:02:40.048094 kernel: BTRFS info (device nvme0n1p6): using free space tree May 16 00:02:40.054233 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 16 00:02:40.056625 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 00:02:40.075777 ignition[1275]: INFO : Ignition 2.20.0 May 16 00:02:40.075777 ignition[1275]: INFO : Stage: files May 16 00:02:40.076884 ignition[1275]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:02:40.076884 ignition[1275]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 16 00:02:40.076884 ignition[1275]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 16 00:02:40.077782 ignition[1275]: INFO : PUT result: OK May 16 00:02:40.080056 ignition[1275]: DEBUG : files: compiled without relabeling support, skipping May 16 00:02:40.080921 ignition[1275]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:02:40.080921 ignition[1275]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:02:40.096449 ignition[1275]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:02:40.097326 ignition[1275]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:02:40.097326 ignition[1275]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:02:40.096884 unknown[1275]: wrote ssh authorized keys file for user: core May 16 00:02:40.099752 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 16 00:02:40.100392 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 16 00:02:40.186377 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 00:02:40.406484 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 16 00:02:40.407605 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/bin/cilium.tar.gz"
May 16 00:02:40.407605 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 16 00:02:40.526829 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 16 00:02:41.067215 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 16 00:02:41.068336 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 16 00:02:41.615171 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 16 00:02:42.114563 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 16 00:02:42.114563 ignition[1275]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 16 00:02:42.117733 ignition[1275]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 00:02:42.119334 ignition[1275]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 00:02:42.119334 ignition[1275]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 16 00:02:42.119334 ignition[1275]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 16 00:02:42.119334 ignition[1275]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 16 00:02:42.119334 ignition[1275]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 00:02:42.119334 ignition[1275]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 00:02:42.119334 ignition[1275]: INFO : files: files passed
May 16 00:02:42.119334 ignition[1275]: INFO : Ignition finished successfully
May 16 00:02:42.120950 systemd[1]: Finished ignition-files.service - Ignition (files).
May 16 00:02:42.128575 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 16 00:02:42.132426 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 16 00:02:42.147593 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 00:02:42.147749 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 00:02:42.158293 initrd-setup-root-after-ignition[1303]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 00:02:42.158293 initrd-setup-root-after-ignition[1303]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 00:02:42.162342 initrd-setup-root-after-ignition[1307]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 00:02:42.162631 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 00:02:42.164517 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 00:02:42.176500 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 00:02:42.203278 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 00:02:42.203444 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 00:02:42.204658 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 00:02:42.205793 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 16 00:02:42.206822 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 16 00:02:42.214437 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 16 00:02:42.227900 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 00:02:42.234456 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 16 00:02:42.244934 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 16 00:02:42.245922 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 00:02:42.247118 systemd[1]: Stopped target timers.target - Timer Units.
May 16 00:02:42.248006 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 00:02:42.248253 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 00:02:42.249399 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 16 00:02:42.250450 systemd[1]: Stopped target basic.target - Basic System.
May 16 00:02:42.251258 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 16 00:02:42.252040 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 00:02:42.252820 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 16 00:02:42.253633 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 16 00:02:42.254479 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 00:02:42.255250 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 00:02:42.256387 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 00:02:42.257138 systemd[1]: Stopped target swap.target - Swaps.
May 16 00:02:42.257855 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 00:02:42.258133 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 00:02:42.259302 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 00:02:42.260094 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 00:02:42.260786 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 00:02:42.261523 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 00:02:42.261982 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 00:02:42.262313 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 00:02:42.263757 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 00:02:42.263945 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 00:02:42.264663 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 00:02:42.264820 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 00:02:42.271521 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 00:02:42.276582 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 00:02:42.277918 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 00:02:42.279020 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 00:02:42.279795 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 00:02:42.279968 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 00:02:42.289930 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 00:02:42.290804 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 00:02:42.292837 ignition[1327]: INFO : Ignition 2.20.0
May 16 00:02:42.292837 ignition[1327]: INFO : Stage: umount
May 16 00:02:42.292837 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:02:42.292837 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 16 00:02:42.292837 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 16 00:02:42.297122 ignition[1327]: INFO : PUT result: OK
May 16 00:02:42.298107 ignition[1327]: INFO : umount: umount passed
May 16 00:02:42.298107 ignition[1327]: INFO : Ignition finished successfully
May 16 00:02:42.300498 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 00:02:42.300640 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 00:02:42.301802 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 00:02:42.301914 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 00:02:42.305061 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 00:02:42.305139 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 00:02:42.306121 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 16 00:02:42.306191 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 16 00:02:42.307828 systemd[1]: Stopped target network.target - Network.
May 16 00:02:42.308400 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 00:02:42.308468 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 00:02:42.309198 systemd[1]: Stopped target paths.target - Path Units.
May 16 00:02:42.310200 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 00:02:42.312267 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 00:02:42.312728 systemd[1]: Stopped target slices.target - Slice Units.
May 16 00:02:42.314042 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 00:02:42.314563 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 00:02:42.314619 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 00:02:42.316098 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 00:02:42.316147 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 00:02:42.317103 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 00:02:42.317166 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 00:02:42.318186 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 00:02:42.318260 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 00:02:42.319174 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 00:02:42.319777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 00:02:42.324335 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 00:02:42.329241 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 00:02:42.329355 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 00:02:42.329405 systemd-networkd[1085]: eth0: DHCPv6 lease lost
May 16 00:02:42.332309 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 00:02:42.332439 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 00:02:42.333591 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 00:02:42.333649 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 00:02:42.337329 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 00:02:42.337715 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 00:02:42.337775 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 00:02:42.338300 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 00:02:42.338342 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 00:02:42.338694 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 00:02:42.338742 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 00:02:42.339054 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 00:02:42.339090 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 00:02:42.339610 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 00:02:42.348613 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 00:02:42.348751 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 00:02:42.351443 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 00:02:42.351516 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 00:02:42.351931 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 00:02:42.351968 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 00:02:42.352348 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 00:02:42.352390 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 00:02:42.352925 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 00:02:42.352964 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 00:02:42.353684 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 00:02:42.353732 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 00:02:42.357580 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 00:02:42.359183 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 00:02:42.359280 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 00:02:42.360301 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 16 00:02:42.360346 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 00:02:42.361957 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 00:02:42.362116 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 00:02:42.363104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 00:02:42.363151 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:02:42.365514 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 00:02:42.365615 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 00:02:42.372847 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 00:02:42.372975 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 00:02:42.445157 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 00:02:42.445282 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 00:02:42.446487 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 00:02:42.446974 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 00:02:42.447028 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 00:02:42.452420 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 00:02:42.460298 systemd[1]: Switching root.
May 16 00:02:42.484937 systemd-journald[179]: Journal stopped
May 16 00:02:44.236875 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
May 16 00:02:44.236958 kernel: SELinux: policy capability network_peer_controls=1
May 16 00:02:44.238278 kernel: SELinux: policy capability open_perms=1
May 16 00:02:44.238303 kernel: SELinux: policy capability extended_socket_class=1
May 16 00:02:44.238321 kernel: SELinux: policy capability always_check_network=0
May 16 00:02:44.238339 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 00:02:44.238374 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 00:02:44.238394 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 00:02:44.238413 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 00:02:44.238439 kernel: audit: type=1403 audit(1747353762.974:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 00:02:44.238461 systemd[1]: Successfully loaded SELinux policy in 60.222ms.
May 16 00:02:44.238494 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.712ms.
May 16 00:02:44.238518 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 16 00:02:44.238539 systemd[1]: Detected virtualization amazon.
May 16 00:02:44.238560 systemd[1]: Detected architecture x86-64.
May 16 00:02:44.238580 systemd[1]: Detected first boot.
May 16 00:02:44.238602 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:02:44.238627 zram_generator::config[1370]: No configuration found.
May 16 00:02:44.238649 systemd[1]: Populated /etc with preset unit settings.
May 16 00:02:44.238677 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 00:02:44.238699 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 00:02:44.238723 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 00:02:44.238746 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 00:02:44.238768 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 00:02:44.238789 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 00:02:44.238813 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 00:02:44.238833 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 00:02:44.238854 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 00:02:44.238874 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 00:02:44.238895 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 00:02:44.238914 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 00:02:44.238932 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 00:02:44.238951 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 00:02:44.238971 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 00:02:44.238995 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 00:02:44.239015 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 00:02:44.239034 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 16 00:02:44.239059 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 00:02:44.239083 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 00:02:44.239102 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 00:02:44.239122 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 00:02:44.239141 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 00:02:44.239165 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 00:02:44.239184 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 00:02:44.240286 systemd[1]: Reached target slices.target - Slice Units.
May 16 00:02:44.240326 systemd[1]: Reached target swap.target - Swaps.
May 16 00:02:44.240349 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 00:02:44.240371 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 00:02:44.240391 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 00:02:44.240413 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 00:02:44.240434 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 00:02:44.240461 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 00:02:44.240482 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 00:02:44.240504 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 00:02:44.240525 systemd[1]: Mounting media.mount - External Media Directory...
May 16 00:02:44.240547 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:02:44.240569 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 00:02:44.240590 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 00:02:44.240611 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 00:02:44.240635 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 00:02:44.240661 systemd[1]: Reached target machines.target - Containers.
May 16 00:02:44.240682 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 00:02:44.240703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:02:44.240724 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 00:02:44.240745 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 00:02:44.240767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:02:44.240788 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 00:02:44.240809 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:02:44.240832 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 00:02:44.240851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:02:44.240870 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 00:02:44.240888 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 00:02:44.240907 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 00:02:44.240925 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 00:02:44.240944 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 00:02:44.240964 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 00:02:44.240983 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 00:02:44.241006 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 00:02:44.241025 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 00:02:44.241045 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 00:02:44.241064 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 00:02:44.241082 systemd[1]: Stopped verity-setup.service.
May 16 00:02:44.241103 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:02:44.241123 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 00:02:44.241142 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 00:02:44.241164 systemd[1]: Mounted media.mount - External Media Directory.
May 16 00:02:44.241184 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 00:02:44.241217 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 00:02:44.241237 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 00:02:44.242249 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 00:02:44.242286 kernel: fuse: init (API version 7.39)
May 16 00:02:44.242307 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 00:02:44.242327 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 00:02:44.242346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:02:44.242365 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:02:44.242419 systemd-journald[1449]: Collecting audit messages is disabled.
May 16 00:02:44.242462 kernel: loop: module loaded
May 16 00:02:44.242481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:02:44.242501 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:02:44.242520 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 00:02:44.242540 systemd-journald[1449]: Journal started
May 16 00:02:44.242576 systemd-journald[1449]: Runtime Journal (/run/log/journal/ec2b6a6064aceb75a7b87993655d68bc) is 4.7M, max 38.2M, 33.4M free.
May 16 00:02:43.865295 systemd[1]: Queued start job for default target multi-user.target.
May 16 00:02:43.920727 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 16 00:02:43.921133 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 00:02:44.247267 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 00:02:44.251233 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 00:02:44.251654 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:02:44.253277 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:02:44.254453 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 00:02:44.255592 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 00:02:44.257752 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 00:02:44.272927 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 00:02:44.281323 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 00:02:44.290455 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 00:02:44.292352 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 00:02:44.292407 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 00:02:44.298714 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 16 00:02:44.315428 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 00:02:44.330482 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 00:02:44.331633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:02:44.370260 kernel: ACPI: bus type drm_connector registered
May 16 00:02:44.374412 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 00:02:44.381477 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 00:02:44.383598 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:02:44.392883 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 00:02:44.393726 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 00:02:44.397362 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 00:02:44.405433 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 00:02:44.413192 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 00:02:44.419600 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 00:02:44.421008 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 00:02:44.422274 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 00:02:44.427028 systemd-journald[1449]: Time spent on flushing to /var/log/journal/ec2b6a6064aceb75a7b87993655d68bc is 107.819ms for 996 entries.
May 16 00:02:44.427028 systemd-journald[1449]: System Journal (/var/log/journal/ec2b6a6064aceb75a7b87993655d68bc) is 8.0M, max 195.6M, 187.6M free.
May 16 00:02:44.558497 systemd-journald[1449]: Received client request to flush runtime journal.
May 16 00:02:44.558572 kernel: loop0: detected capacity change from 0 to 224512
May 16 00:02:44.558599 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 00:02:44.428844 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 00:02:44.430474 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 00:02:44.432498 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 00:02:44.441194 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 00:02:44.443500 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 00:02:44.447115 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 00:02:44.467236 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 16 00:02:44.479258 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 16 00:02:44.496076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 00:02:44.523671 systemd-tmpfiles[1498]: ACLs are not supported, ignoring.
May 16 00:02:44.523694 systemd-tmpfiles[1498]: ACLs are not supported, ignoring.
May 16 00:02:44.538943 udevadm[1510]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 16 00:02:44.545022 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 00:02:44.552519 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 00:02:44.561504 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 00:02:44.594552 kernel: loop1: detected capacity change from 0 to 138184
May 16 00:02:44.599872 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 00:02:44.601590 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 16 00:02:44.625131 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 00:02:44.630447 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 00:02:44.649480 systemd-tmpfiles[1521]: ACLs are not supported, ignoring.
May 16 00:02:44.649502 systemd-tmpfiles[1521]: ACLs are not supported, ignoring.
May 16 00:02:44.654490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 00:02:44.701247 kernel: loop2: detected capacity change from 0 to 62848
May 16 00:02:44.757229 kernel: loop3: detected capacity change from 0 to 140992
May 16 00:02:44.867237 kernel: loop4: detected capacity change from 0 to 224512
May 16 00:02:44.897233 kernel: loop5: detected capacity change from 0 to 138184
May 16 00:02:44.930264 kernel: loop6: detected capacity change from 0 to 62848
May 16 00:02:44.949241 kernel: loop7: detected capacity change from 0 to 140992
May 16 00:02:44.968147 (sd-merge)[1528]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 16 00:02:44.970846 (sd-merge)[1528]: Merged extensions into '/usr'.
May 16 00:02:44.974614 systemd[1]: Reloading requested from client PID 1497 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 00:02:44.974630 systemd[1]: Reloading...
May 16 00:02:45.045276 zram_generator::config[1551]: No configuration found.
May 16 00:02:45.288885 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:02:45.368968 systemd[1]: Reloading finished in 393 ms.
May 16 00:02:45.400507 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 00:02:45.411419 systemd[1]: Starting ensure-sysext.service...
May 16 00:02:45.418026 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 00:02:45.421486 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 00:02:45.435588 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 00:02:45.437684 systemd[1]: Reloading requested from client PID 1605 ('systemctl') (unit ensure-sysext.service)...
May 16 00:02:45.437708 systemd[1]: Reloading...
May 16 00:02:45.482925 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 00:02:45.484146 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 00:02:45.485616 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 00:02:45.486159 systemd-tmpfiles[1606]: ACLs are not supported, ignoring.
May 16 00:02:45.486398 systemd-tmpfiles[1606]: ACLs are not supported, ignoring.
May 16 00:02:45.495349 systemd-tmpfiles[1606]: Detected autofs mount point /boot during canonicalization of boot.
May 16 00:02:45.496432 systemd-tmpfiles[1606]: Skipping /boot
May 16 00:02:45.519850 systemd-udevd[1608]: Using default interface naming scheme 'v255'.
May 16 00:02:45.533966 systemd-tmpfiles[1606]: Detected autofs mount point /boot during canonicalization of boot.
May 16 00:02:45.533983 systemd-tmpfiles[1606]: Skipping /boot May 16 00:02:45.567270 zram_generator::config[1635]: No configuration found. May 16 00:02:45.700252 (udev-worker)[1643]: Network interface NamePolicy= disabled on kernel command line. May 16 00:02:45.791565 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 16 00:02:45.797314 kernel: ACPI: button: Power Button [PWRF] May 16 00:02:45.800487 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 May 16 00:02:45.804292 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 16 00:02:45.822611 kernel: ACPI: button: Sleep Button [SLPF] May 16 00:02:45.901633 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1650) May 16 00:02:45.909292 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 May 16 00:02:45.940573 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:02:46.035735 ldconfig[1492]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 00:02:46.075958 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 16 00:02:46.077415 systemd[1]: Reloading finished in 639 ms. May 16 00:02:46.102183 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:02:46.104649 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 16 00:02:46.107419 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:02:46.111231 kernel: mousedev: PS/2 mouse device common for all mice May 16 00:02:46.169880 systemd[1]: Finished ensure-sysext.service. 
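Editor's note: the systemd-tmpfiles "Duplicate line for path" warnings above occur when two tmpfiles.d fragments declare an entry for the same path; systemd-tmpfiles keeps the first entry it parses and ignores the rest. A minimal illustration with hypothetical fragment names and values (not the actual Flatcar files):

```
# /usr/lib/tmpfiles.d/a.conf  (hypothetical)
d /var/log/journal 2755 root systemd-journal - -

# /usr/lib/tmpfiles.d/b.conf  (hypothetical)
# Same path declared again; at boot this produces a warning like
# "/usr/lib/tmpfiles.d/b.conf:1: Duplicate line for path "/var/log/journal", ignoring."
d /var/log/journal 2755 root systemd-journal - -
```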
May 16 00:02:46.170771 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 16 00:02:46.188153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 16 00:02:46.188851 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:02:46.193426 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:02:46.196621 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 00:02:46.197516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:02:46.200397 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 16 00:02:46.206520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:02:46.208742 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:02:46.213623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:02:46.226360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:02:46.228237 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:02:46.232626 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 00:02:46.243867 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 16 00:02:46.250683 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:02:46.258513 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:02:46.260408 systemd[1]: Reached target time-set.target - System Time Set. 
May 16 00:02:46.263957 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 00:02:46.265062 lvm[1801]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:02:46.268378 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:02:46.269566 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:02:46.270994 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:02:46.272389 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:02:46.274243 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:02:46.276272 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:02:46.280180 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:02:46.289174 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:02:46.289501 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:02:46.291768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:02:46.291968 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:02:46.304980 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:02:46.311491 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 00:02:46.335011 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 00:02:46.338510 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
May 16 00:02:46.347818 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 00:02:46.349606 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:02:46.360401 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 16 00:02:46.393240 lvm[1835]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:02:46.400392 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 00:02:46.411495 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 16 00:02:46.424004 augenrules[1845]: No rules May 16 00:02:46.430409 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:02:46.431108 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:02:46.433758 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 00:02:46.437324 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:02:46.440293 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 16 00:02:46.442244 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 00:02:46.454032 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 00:02:46.524246 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:02:46.552972 systemd-resolved[1815]: Positive Trust Anchors: May 16 00:02:46.552996 systemd-resolved[1815]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:02:46.553045 systemd-resolved[1815]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:02:46.555769 systemd-networkd[1814]: lo: Link UP May 16 00:02:46.555829 systemd-networkd[1814]: lo: Gained carrier May 16 00:02:46.557535 systemd-networkd[1814]: Enumeration completed May 16 00:02:46.557962 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:02:46.557967 systemd-networkd[1814]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:02:46.559370 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:02:46.560588 systemd-resolved[1815]: Defaulting to hostname 'linux'. May 16 00:02:46.562368 systemd-networkd[1814]: eth0: Link UP May 16 00:02:46.562612 systemd-networkd[1814]: eth0: Gained carrier May 16 00:02:46.562643 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:02:46.566464 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 00:02:46.567555 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:02:46.569288 systemd[1]: Reached target network.target - Network.
May 16 00:02:46.569966 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:02:46.570555 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:02:46.571150 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 00:02:46.571593 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 00:02:46.572126 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 00:02:46.572616 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 00:02:46.572998 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 00:02:46.573383 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 00:02:46.573420 systemd[1]: Reached target paths.target - Path Units. May 16 00:02:46.573782 systemd[1]: Reached target timers.target - Timer Units. May 16 00:02:46.574335 systemd-networkd[1814]: eth0: DHCPv4 address 172.31.20.206/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 16 00:02:46.576278 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 00:02:46.578496 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 00:02:46.585705 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 00:02:46.586951 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 00:02:46.587524 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:02:46.587941 systemd[1]: Reached target basic.target - Basic System. May 16 00:02:46.588394 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
May 16 00:02:46.588437 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 00:02:46.589602 systemd[1]: Starting containerd.service - containerd container runtime... May 16 00:02:46.594494 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 16 00:02:46.597025 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 00:02:46.606347 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 00:02:46.609411 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 00:02:46.612297 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 00:02:46.615428 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 00:02:46.618105 jq[1870]: false May 16 00:02:46.619476 systemd[1]: Started ntpd.service - Network Time Service. May 16 00:02:46.623346 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 00:02:46.633322 systemd[1]: Starting setup-oem.service - Setup OEM... May 16 00:02:46.644429 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 00:02:46.649745 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 00:02:46.680419 systemd[1]: Starting systemd-logind.service - User Login Management... May 16 00:02:46.683771 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 00:02:46.684481 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 00:02:46.687385 systemd[1]: Starting update-engine.service - Update Engine... 
May 16 00:02:46.697812 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 00:02:46.703807 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 00:02:46.704036 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 00:02:46.704166 jq[1882]: true May 16 00:02:46.724484 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 00:02:46.724963 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 00:02:46.737030 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 00:02:46.756799 jq[1885]: true May 16 00:02:46.771131 dbus-daemon[1869]: [system] SELinux support is enabled May 16 00:02:46.771377 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 00:02:46.777324 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 00:02:46.777373 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 00:02:46.778025 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 00:02:46.778050 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 16 00:02:46.795099 ntpd[1873]: ntpd 4.2.8p17@1.4004-o Thu May 15 21:40:23 UTC 2025 (1): Starting May 16 00:02:46.795124 ntpd[1873]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 16 00:02:46.795136 ntpd[1873]: ---------------------------------------------------- May 16 00:02:46.795147 ntpd[1873]: ntp-4 is maintained by Network Time Foundation, May 16 00:02:46.795157 ntpd[1873]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 16 00:02:46.795167 ntpd[1873]: corporation. Support and training for ntp-4 are May 16 00:02:46.795177 ntpd[1873]: available at https://www.nwtime.org/support May 16 00:02:46.795186 ntpd[1873]: ---------------------------------------------------- May 16 00:02:46.803725 (ntainerd)[1903]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 00:02:46.805890 dbus-daemon[1869]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1814 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 16 00:02:46.808542 ntpd[1873]: proto: precision = 0.061 usec (-24) May 16 00:02:46.808888 ntpd[1873]: basedate set to 2025-05-03 May 16 00:02:46.808906 ntpd[1873]: gps base set to 2025-05-04 (week 2365) May 16 00:02:46.809769 dbus-daemon[1869]: [system] Successfully activated service 'org.freedesktop.systemd1' May 16 00:02:46.823420 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 16 00:02:46.825173 ntpd[1873]: Listen and drop on 0 v6wildcard [::]:123 May 16 00:02:46.829284 ntpd[1873]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 16 00:02:46.829511 ntpd[1873]: Listen normally on 2 lo 127.0.0.1:123 May 16 00:02:46.829549 ntpd[1873]: Listen normally on 3 eth0 172.31.20.206:123 May 16 00:02:46.830345 ntpd[1873]: Listen normally on 4 lo [::1]:123 May 16 00:02:46.830405 ntpd[1873]: bind(21) AF_INET6 fe80::43b:ecff:fe04:5629%2#123 flags 0x11 failed: Cannot assign requested address May 16 00:02:46.830427 ntpd[1873]: unable to create socket on eth0 (5) for fe80::43b:ecff:fe04:5629%2#123 May 16 00:02:46.830441 ntpd[1873]: failed to init interface for address fe80::43b:ecff:fe04:5629%2 May 16 00:02:46.830473 ntpd[1873]: Listening on routing socket on fd #21 for interface updates May 16 00:02:46.847234 tar[1897]: linux-amd64/LICENSE May 16 00:02:46.847234 tar[1897]: linux-amd64/helm May 16 00:02:46.853751 ntpd[1873]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 16 00:02:46.853805 ntpd[1873]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 16 00:02:46.868621 update_engine[1881]: I20250516 00:02:46.866272 1881 main.cc:92] Flatcar Update Engine starting May 16 00:02:46.867838 systemd[1]: Started update-engine.service - Update Engine. May 16 00:02:46.869038 update_engine[1881]: I20250516 00:02:46.868595 1881 update_check_scheduler.cc:74] Next update check in 10m32s May 16 00:02:46.878639 extend-filesystems[1871]: Found loop4 May 16 00:02:46.878639 extend-filesystems[1871]: Found loop5 May 16 00:02:46.878639 extend-filesystems[1871]: Found loop6 May 16 00:02:46.878639 extend-filesystems[1871]: Found loop7 May 16 00:02:46.878639 extend-filesystems[1871]: Found nvme0n1 May 16 00:02:46.878639 extend-filesystems[1871]: Found nvme0n1p1 May 16 00:02:46.878639 extend-filesystems[1871]: Found nvme0n1p2 May 16 00:02:46.878639 extend-filesystems[1871]: Found nvme0n1p3 May 16 00:02:46.878639 extend-filesystems[1871]: Found usr May 16 00:02:46.878639 extend-filesystems[1871]: Found nvme0n1p4 May 16 00:02:46.878639 extend-filesystems[1871]: Found nvme0n1p6 May 16 00:02:46.878639 extend-filesystems[1871]: Found nvme0n1p7 May 16 00:02:46.878639 extend-filesystems[1871]: Found nvme0n1p9 May 16 00:02:46.878639 extend-filesystems[1871]: Checking size of /dev/nvme0n1p9 May 16 00:02:46.934342 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 16 00:02:46.876479 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 00:02:46.934531 extend-filesystems[1871]: Resized partition /dev/nvme0n1p9 May 16 00:02:46.877905 systemd[1]: motdgen.service: Deactivated successfully.
May 16 00:02:46.938468 extend-filesystems[1932]: resize2fs 1.47.1 (20-May-2024) May 16 00:02:46.878170 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 00:02:46.921436 systemd[1]: Finished setup-oem.service - Setup OEM. May 16 00:02:47.007296 coreos-metadata[1868]: May 16 00:02:47.007 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 16 00:02:47.069839 systemd-logind[1878]: Watching system buttons on /dev/input/event1 (Power Button) May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.008 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.011 INFO Fetch successful May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.011 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.012 INFO Fetch successful May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.012 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.014 INFO Fetch successful May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.014 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.016 INFO Fetch successful May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.016 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.016 INFO Fetch failed with 404: resource not found May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.016 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.017 INFO Fetch successful May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.017 
INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.017 INFO Fetch successful May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.017 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.021 INFO Fetch successful May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.022 INFO Fetch successful May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.022 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 16 00:02:47.073699 coreos-metadata[1868]: May 16 00:02:47.022 INFO Fetch successful May 16 00:02:47.069862 systemd-logind[1878]: Watching system buttons on /dev/input/event2 (Sleep Button) May 16 00:02:47.069886 systemd-logind[1878]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 16 00:02:47.070930 systemd-logind[1878]: New seat seat0. May 16 00:02:47.084825 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1643) May 16 00:02:47.075691 systemd[1]: Started systemd-logind.service - User Login Management. May 16 00:02:47.100344 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 16 00:02:47.101663 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
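Editor's note: the coreos-metadata sequence above (a PUT to `http://169.254.169.254/latest/api/token`, then GETs against the `2021-01-03` meta-data tree) follows the IMDSv2 pattern of fetching a session token first and sending it with every request. A minimal Python sketch of the same flow; the endpoint paths come from the log, but the helper names are ours, not from coreos-metadata:

```python
import urllib.request

IMDS = "http://169.254.169.254"
API_VERSION = "2021-01-03"  # version used by coreos-metadata in the log above

def metadata_url(path: str) -> str:
    """Build the IMDS URL for a meta-data key, as seen in the log."""
    return f"{IMDS}/{API_VERSION}/meta-data/{path}"

def get_token(ttl_seconds: int = 21600) -> str:
    """IMDSv2: obtain a session token via PUT /latest/api/token
    (hence 'Putting http://169.254.169.254/latest/api/token' in the log)."""
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def fetch(path: str, token: str) -> str:
    """Fetch one meta-data key (e.g. 'instance-id'), sending the token header."""
    req = urllib.request.Request(
        metadata_url(path), headers={"X-aws-ec2-metadata-token": token})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()
```

As in the log, a key that does not exist for the instance (e.g. `ipv6` here) returns HTTP 404, which `urllib` surfaces as an `HTTPError`.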
May 16 00:02:47.104283 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 16 00:02:47.117499 extend-filesystems[1932]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 16 00:02:47.117499 extend-filesystems[1932]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 00:02:47.117499 extend-filesystems[1932]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 16 00:02:47.134850 extend-filesystems[1871]: Resized filesystem in /dev/nvme0n1p9 May 16 00:02:47.141368 bash[1936]: Updated "/home/core/.ssh/authorized_keys" May 16 00:02:47.121240 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:02:47.121500 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 00:02:47.135289 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 00:02:47.161411 systemd[1]: Starting sshkeys.service... May 16 00:02:47.243404 dbus-daemon[1869]: [system] Successfully activated service 'org.freedesktop.hostname1' May 16 00:02:47.243598 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 16 00:02:47.245282 dbus-daemon[1869]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1912 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 16 00:02:47.255521 systemd[1]: Starting polkit.service - Authorization Manager... May 16 00:02:47.274574 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 16 00:02:47.284402 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
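Editor's note: the EXT4-fs and resize2fs messages above report sizes in 4 KiB blocks rather than bytes. A quick sketch of the conversion, with the block size and block counts taken from the log:

```python
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs output in the log

def blocks_to_gib(blocks: int) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * BLOCK_SIZE / 2**30

# The root filesystem on /dev/nvme0n1p9 grew from 553472 to 1489915 blocks:
old_gib = blocks_to_gib(553472)   # ~2.11 GiB
new_gib = blocks_to_gib(1489915)  # ~5.68 GiB
```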
May 16 00:02:47.318948 polkitd[1987]: Started polkitd version 121 May 16 00:02:47.370434 polkitd[1987]: Loading rules from directory /etc/polkit-1/rules.d May 16 00:02:47.370534 polkitd[1987]: Loading rules from directory /usr/share/polkit-1/rules.d May 16 00:02:47.377583 polkitd[1987]: Finished loading, compiling and executing 2 rules May 16 00:02:47.382892 dbus-daemon[1869]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 16 00:02:47.385338 systemd[1]: Started polkit.service - Authorization Manager. May 16 00:02:47.387796 polkitd[1987]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 16 00:02:47.442377 coreos-metadata[1997]: May 16 00:02:47.441 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 16 00:02:47.445503 systemd-resolved[1815]: System hostname changed to 'ip-172-31-20-206'. May 16 00:02:47.445590 systemd-hostnamed[1912]: Hostname set to (transient) May 16 00:02:47.447744 coreos-metadata[1997]: May 16 00:02:47.447 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 16 00:02:47.449757 coreos-metadata[1997]: May 16 00:02:47.449 INFO Fetch successful May 16 00:02:47.449757 coreos-metadata[1997]: May 16 00:02:47.449 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 16 00:02:47.450124 coreos-metadata[1997]: May 16 00:02:47.450 INFO Fetch successful May 16 00:02:47.450603 locksmithd[1919]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:02:47.453263 unknown[1997]: wrote ssh authorized keys file for user: core May 16 00:02:47.562736 update-ssh-keys[2050]: Updated "/home/core/.ssh/authorized_keys" May 16 00:02:47.568935 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 16 00:02:47.571490 systemd[1]: Finished sshkeys.service. 
May 16 00:02:47.712273 containerd[1903]: time="2025-05-16T00:02:47.711526823Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 16 00:02:47.797487 ntpd[1873]: bind(24) AF_INET6 fe80::43b:ecff:fe04:5629%2#123 flags 0x11 failed: Cannot assign requested address May 16 00:02:47.797535 ntpd[1873]: unable to create socket on eth0 (6) for fe80::43b:ecff:fe04:5629%2#123 May 16 00:02:47.797551 ntpd[1873]: failed to init interface for address fe80::43b:ecff:fe04:5629%2 May 16 00:02:47.808499 containerd[1903]: time="2025-05-16T00:02:47.808413721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 16 00:02:47.814227 containerd[1903]: time="2025-05-16T00:02:47.813526875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 16 00:02:47.814227 containerd[1903]: time="2025-05-16T00:02:47.813602151Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 16 00:02:47.814227 containerd[1903]: time="2025-05-16T00:02:47.813626120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 16 00:02:47.814227 containerd[1903]: time="2025-05-16T00:02:47.813808731Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 16 00:02:47.814227 containerd[1903]: time="2025-05-16T00:02:47.813830169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 16 00:02:47.814227 containerd[1903]: time="2025-05-16T00:02:47.813901721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:02:47.814227 containerd[1903]: time="2025-05-16T00:02:47.813918321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 16 00:02:47.814227 containerd[1903]: time="2025-05-16T00:02:47.814144064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:02:47.814227 containerd[1903]: time="2025-05-16T00:02:47.814162630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 16 00:02:47.814227 containerd[1903]: time="2025-05-16T00:02:47.814180852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:02:47.814227 containerd[1903]: time="2025-05-16T00:02:47.814194644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 16 00:02:47.814687 containerd[1903]: time="2025-05-16T00:02:47.814318741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 May 16 00:02:47.814687 containerd[1903]: time="2025-05-16T00:02:47.814570091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 16 00:02:47.814762 containerd[1903]: time="2025-05-16T00:02:47.814747040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:02:47.814802 containerd[1903]: time="2025-05-16T00:02:47.814768356Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 16 00:02:47.815233 containerd[1903]: time="2025-05-16T00:02:47.814862874Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 16 00:02:47.815233 containerd[1903]: time="2025-05-16T00:02:47.814925557Z" level=info msg="metadata content store policy set" policy=shared May 16 00:02:47.823522 containerd[1903]: time="2025-05-16T00:02:47.823473565Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 16 00:02:47.823643 containerd[1903]: time="2025-05-16T00:02:47.823560880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 16 00:02:47.823643 containerd[1903]: time="2025-05-16T00:02:47.823585306Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 16 00:02:47.824233 containerd[1903]: time="2025-05-16T00:02:47.823767600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 16 00:02:47.824233 containerd[1903]: time="2025-05-16T00:02:47.823796600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 16 00:02:47.824233 containerd[1903]: time="2025-05-16T00:02:47.823978052Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 00:02:47.825429 containerd[1903]: time="2025-05-16T00:02:47.825400551Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 00:02:47.825592 containerd[1903]: time="2025-05-16T00:02:47.825572471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 16 00:02:47.825640 containerd[1903]: time="2025-05-16T00:02:47.825600810Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 16 00:02:47.825640 containerd[1903]: time="2025-05-16T00:02:47.825622713Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 16 00:02:47.825723 containerd[1903]: time="2025-05-16T00:02:47.825643233Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 16 00:02:47.825723 containerd[1903]: time="2025-05-16T00:02:47.825664063Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 00:02:47.825723 containerd[1903]: time="2025-05-16T00:02:47.825682804Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 16 00:02:47.825723 containerd[1903]: time="2025-05-16T00:02:47.825703003Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 16 00:02:47.825859 containerd[1903]: time="2025-05-16T00:02:47.825725422Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 16 00:02:47.825859 containerd[1903]: time="2025-05-16T00:02:47.825744916Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 16 00:02:47.825859 containerd[1903]: time="2025-05-16T00:02:47.825765195Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 00:02:47.825859 containerd[1903]: time="2025-05-16T00:02:47.825785340Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 00:02:47.825859 containerd[1903]: time="2025-05-16T00:02:47.825812820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 16 00:02:47.825859 containerd[1903]: time="2025-05-16T00:02:47.825845387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:02:47.826083 containerd[1903]: time="2025-05-16T00:02:47.825866078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:02:47.826083 containerd[1903]: time="2025-05-16T00:02:47.825888041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 00:02:47.826083 containerd[1903]: time="2025-05-16T00:02:47.825907018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:02:47.826083 containerd[1903]: time="2025-05-16T00:02:47.825926444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 16 00:02:47.826083 containerd[1903]: time="2025-05-16T00:02:47.825943625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 00:02:47.826083 containerd[1903]: time="2025-05-16T00:02:47.825963500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 16 00:02:47.826083 containerd[1903]: time="2025-05-16T00:02:47.825983233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 16 00:02:47.826083 containerd[1903]: time="2025-05-16T00:02:47.826013877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 16 00:02:47.826083 containerd[1903]: time="2025-05-16T00:02:47.826032722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 00:02:47.826083 containerd[1903]: time="2025-05-16T00:02:47.826056517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 16 00:02:47.826083 containerd[1903]: time="2025-05-16T00:02:47.826075842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:02:47.828548 containerd[1903]: time="2025-05-16T00:02:47.826098634Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 16 00:02:47.828548 containerd[1903]: time="2025-05-16T00:02:47.826129434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 16 00:02:47.828548 containerd[1903]: time="2025-05-16T00:02:47.826149575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 16 00:02:47.828548 containerd[1903]: time="2025-05-16T00:02:47.826165835Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:02:47.828548 containerd[1903]: time="2025-05-16T00:02:47.827107613Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 16 00:02:47.828548 containerd[1903]: time="2025-05-16T00:02:47.827142165Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 16 00:02:47.828548 containerd[1903]: time="2025-05-16T00:02:47.827279659Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:02:47.828548 containerd[1903]: time="2025-05-16T00:02:47.827298808Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 16 00:02:47.828548 containerd[1903]: time="2025-05-16T00:02:47.827313348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:02:47.828548 containerd[1903]: time="2025-05-16T00:02:47.827332006Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 16 00:02:47.828548 containerd[1903]: time="2025-05-16T00:02:47.827346736Z" level=info msg="NRI interface is disabled by configuration." May 16 00:02:47.828548 containerd[1903]: time="2025-05-16T00:02:47.827362492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 16 00:02:47.829059 containerd[1903]: time="2025-05-16T00:02:47.827755403Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:02:47.829059 containerd[1903]: time="2025-05-16T00:02:47.827820300Z" level=info msg="Connect containerd service" May 16 00:02:47.829059 containerd[1903]: time="2025-05-16T00:02:47.827864767Z" level=info msg="using legacy CRI server" May 16 00:02:47.829059 containerd[1903]: time="2025-05-16T00:02:47.827873745Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 00:02:47.829059 containerd[1903]: time="2025-05-16T00:02:47.828036459Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:02:47.830903 containerd[1903]: time="2025-05-16T00:02:47.830868225Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:02:47.831596 containerd[1903]: time="2025-05-16T00:02:47.831558532Z" level=info msg="Start subscribing containerd event" May 16 00:02:47.831650 containerd[1903]: time="2025-05-16T00:02:47.831621682Z" level=info msg="Start recovering state" May 16 00:02:47.831720 containerd[1903]: time="2025-05-16T00:02:47.831704985Z" level=info msg="Start event monitor" May 16 00:02:47.831760 containerd[1903]: time="2025-05-16T00:02:47.831724969Z" level=info msg="Start 
snapshots syncer" May 16 00:02:47.831760 containerd[1903]: time="2025-05-16T00:02:47.831738193Z" level=info msg="Start cni network conf syncer for default" May 16 00:02:47.831760 containerd[1903]: time="2025-05-16T00:02:47.831752111Z" level=info msg="Start streaming server" May 16 00:02:47.833226 containerd[1903]: time="2025-05-16T00:02:47.832753458Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 00:02:47.833320 containerd[1903]: time="2025-05-16T00:02:47.833300936Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:02:47.833491 systemd[1]: Started containerd.service - containerd container runtime. May 16 00:02:47.836298 containerd[1903]: time="2025-05-16T00:02:47.836271490Z" level=info msg="containerd successfully booted in 0.127984s" May 16 00:02:47.971456 systemd-networkd[1814]: eth0: Gained IPv6LL May 16 00:02:47.976879 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 00:02:47.978040 systemd[1]: Reached target network-online.target - Network is Online. May 16 00:02:47.985595 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 16 00:02:47.995475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:02:47.997963 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 00:02:48.058058 sshd_keygen[1902]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:02:48.091077 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 00:02:48.117052 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 00:02:48.117854 amazon-ssm-agent[2075]: Initializing new seelog logger May 16 00:02:48.120358 amazon-ssm-agent[2075]: New Seelog Logger Creation Complete May 16 00:02:48.120358 amazon-ssm-agent[2075]: 2025/05/16 00:02:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
May 16 00:02:48.120358 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 16 00:02:48.120358 amazon-ssm-agent[2075]: 2025/05/16 00:02:48 processing appconfig overrides
May 16 00:02:48.123234 amazon-ssm-agent[2075]: 2025/05/16 00:02:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 16 00:02:48.123234 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 16 00:02:48.123234 amazon-ssm-agent[2075]: 2025/05/16 00:02:48 processing appconfig overrides
May 16 00:02:48.123234 amazon-ssm-agent[2075]: 2025/05/16 00:02:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 16 00:02:48.123234 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 16 00:02:48.123234 amazon-ssm-agent[2075]: 2025/05/16 00:02:48 processing appconfig overrides
May 16 00:02:48.125230 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO Proxy environment variables:
May 16 00:02:48.128526 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 16 00:02:48.133632 amazon-ssm-agent[2075]: 2025/05/16 00:02:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 16 00:02:48.133632 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 16 00:02:48.133632 amazon-ssm-agent[2075]: 2025/05/16 00:02:48 processing appconfig overrides
May 16 00:02:48.138548 systemd[1]: Started sshd@0-172.31.20.206:22-139.178.89.65:41496.service - OpenSSH per-connection server daemon (139.178.89.65:41496).
May 16 00:02:48.155338 systemd[1]: issuegen.service: Deactivated successfully.
May 16 00:02:48.155568 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 16 00:02:48.166792 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 16 00:02:48.193269 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 16 00:02:48.206778 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 16 00:02:48.218620 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 16 00:02:48.219639 systemd[1]: Reached target getty.target - Login Prompts.
May 16 00:02:48.230915 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO https_proxy:
May 16 00:02:48.333002 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO http_proxy:
May 16 00:02:48.336918 tar[1897]: linux-amd64/README.md
May 16 00:02:48.352305 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 16 00:02:48.431908 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO no_proxy:
May 16 00:02:48.437189 sshd[2099]: Accepted publickey for core from 139.178.89.65 port 41496 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg
May 16 00:02:48.437270 sshd-session[2099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:02:48.447914 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 00:02:48.454586 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 00:02:48.460058 systemd-logind[1878]: New session 1 of user core.
May 16 00:02:48.477643 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 00:02:48.490668 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 00:02:48.497631 (systemd)[2116]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 00:02:48.510239 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO Checking if agent identity type OnPrem can be assumed
May 16 00:02:48.510239 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO Checking if agent identity type EC2 can be assumed
May 16 00:02:48.510239 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO Agent will take identity from EC2
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [amazon-ssm-agent] Starting Core Agent
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [amazon-ssm-agent] registrar detected. Attempting registration
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [Registrar] Starting registrar module
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [EC2Identity] EC2 registration was successful.
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [CredentialRefresher] credentialRefresher has started
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [CredentialRefresher] Starting credentials refresher loop
May 16 00:02:48.510893 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO EC2RoleProvider Successfully connected with instance profile role credentials
May 16 00:02:48.531933 amazon-ssm-agent[2075]: 2025-05-16 00:02:48 INFO [CredentialRefresher] Next credential rotation will be in 31.149982055333332 minutes
May 16 00:02:48.640540 systemd[2116]: Queued start job for default target default.target.
May 16 00:02:48.650397 systemd[2116]: Created slice app.slice - User Application Slice.
May 16 00:02:48.650432 systemd[2116]: Reached target paths.target - Paths.
May 16 00:02:48.650447 systemd[2116]: Reached target timers.target - Timers.
May 16 00:02:48.652467 systemd[2116]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 00:02:48.664530 systemd[2116]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 00:02:48.665176 systemd[2116]: Reached target sockets.target - Sockets.
May 16 00:02:48.665199 systemd[2116]: Reached target basic.target - Basic System.
May 16 00:02:48.665271 systemd[2116]: Reached target default.target - Main User Target.
May 16 00:02:48.665302 systemd[2116]: Startup finished in 159ms.
May 16 00:02:48.665452 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 00:02:48.674548 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 00:02:49.524864 amazon-ssm-agent[2075]: 2025-05-16 00:02:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
May 16 00:02:49.625395 amazon-ssm-agent[2075]: 2025-05-16 00:02:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2127) started
May 16 00:02:49.631479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:02:49.635077 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 00:02:49.635987 systemd[1]: Startup finished in 605ms (kernel) + 7.262s (initrd) + 6.719s (userspace) = 14.586s.
May 16 00:02:49.645683 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 00:02:49.726431 amazon-ssm-agent[2075]: 2025-05-16 00:02:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
May 16 00:02:50.468996 kubelet[2139]: E0516 00:02:50.468925 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 00:02:50.471342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 00:02:50.471489 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 00:02:50.471753 systemd[1]: kubelet.service: Consumed 1.067s CPU time.
May 16 00:02:50.795598 ntpd[1873]: Listen normally on 7 eth0 [fe80::43b:ecff:fe04:5629%2]:123
May 16 00:02:50.795983 ntpd[1873]: 16 May 00:02:50 ntpd[1873]: Listen normally on 7 eth0 [fe80::43b:ecff:fe04:5629%2]:123
May 16 00:02:54.890376 systemd-resolved[1815]: Clock change detected. Flushing caches.
May 16 00:02:54.919186 systemd[1]: Started sshd@1-172.31.20.206:22-139.178.89.65:57242.service - OpenSSH per-connection server daemon (139.178.89.65:57242).
May 16 00:02:55.081065 sshd[2155]: Accepted publickey for core from 139.178.89.65 port 57242 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg
May 16 00:02:55.082410 sshd-session[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:02:55.086965 systemd-logind[1878]: New session 2 of user core.
May 16 00:02:55.095021 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 00:02:55.214617 sshd[2157]: Connection closed by 139.178.89.65 port 57242
May 16 00:02:55.215358 sshd-session[2155]: pam_unix(sshd:session): session closed for user core
May 16 00:02:55.218954 systemd[1]: sshd@1-172.31.20.206:22-139.178.89.65:57242.service: Deactivated successfully.
May 16 00:02:55.220670 systemd[1]: session-2.scope: Deactivated successfully.
May 16 00:02:55.221852 systemd-logind[1878]: Session 2 logged out. Waiting for processes to exit.
May 16 00:02:55.223130 systemd-logind[1878]: Removed session 2.
May 16 00:02:55.247549 systemd[1]: Started sshd@2-172.31.20.206:22-139.178.89.65:57244.service - OpenSSH per-connection server daemon (139.178.89.65:57244).
May 16 00:02:55.411843 sshd[2162]: Accepted publickey for core from 139.178.89.65 port 57244 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg
May 16 00:02:55.413351 sshd-session[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:02:55.418427 systemd-logind[1878]: New session 3 of user core.
May 16 00:02:55.428013 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 00:02:55.541481 sshd[2164]: Connection closed by 139.178.89.65 port 57244
May 16 00:02:55.541981 sshd-session[2162]: pam_unix(sshd:session): session closed for user core
May 16 00:02:55.545114 systemd[1]: sshd@2-172.31.20.206:22-139.178.89.65:57244.service: Deactivated successfully.
May 16 00:02:55.546703 systemd[1]: session-3.scope: Deactivated successfully.
May 16 00:02:55.548279 systemd-logind[1878]: Session 3 logged out. Waiting for processes to exit.
May 16 00:02:55.549239 systemd-logind[1878]: Removed session 3.
May 16 00:02:55.579141 systemd[1]: Started sshd@3-172.31.20.206:22-139.178.89.65:57248.service - OpenSSH per-connection server daemon (139.178.89.65:57248).
May 16 00:02:55.741692 sshd[2169]: Accepted publickey for core from 139.178.89.65 port 57248 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg
May 16 00:02:55.743144 sshd-session[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:02:55.747884 systemd-logind[1878]: New session 4 of user core.
May 16 00:02:55.762985 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 00:02:55.881140 sshd[2171]: Connection closed by 139.178.89.65 port 57248
May 16 00:02:55.882160 sshd-session[2169]: pam_unix(sshd:session): session closed for user core
May 16 00:02:55.885626 systemd[1]: sshd@3-172.31.20.206:22-139.178.89.65:57248.service: Deactivated successfully.
May 16 00:02:55.887686 systemd[1]: session-4.scope: Deactivated successfully.
May 16 00:02:55.889305 systemd-logind[1878]: Session 4 logged out. Waiting for processes to exit.
May 16 00:02:55.890491 systemd-logind[1878]: Removed session 4.
May 16 00:02:55.912599 systemd[1]: Started sshd@4-172.31.20.206:22-139.178.89.65:57252.service - OpenSSH per-connection server daemon (139.178.89.65:57252).
May 16 00:02:56.074489 sshd[2176]: Accepted publickey for core from 139.178.89.65 port 57252 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg
May 16 00:02:56.075884 sshd-session[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:02:56.081228 systemd-logind[1878]: New session 5 of user core.
May 16 00:02:56.091024 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 00:02:56.202580 sudo[2179]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 00:02:56.203295 sudo[2179]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:02:56.217318 sudo[2179]: pam_unix(sudo:session): session closed for user root
May 16 00:02:56.239290 sshd[2178]: Connection closed by 139.178.89.65 port 57252
May 16 00:02:56.240072 sshd-session[2176]: pam_unix(sshd:session): session closed for user core
May 16 00:02:56.243393 systemd[1]: sshd@4-172.31.20.206:22-139.178.89.65:57252.service: Deactivated successfully.
May 16 00:02:56.245350 systemd[1]: session-5.scope: Deactivated successfully.
May 16 00:02:56.246844 systemd-logind[1878]: Session 5 logged out. Waiting for processes to exit.
May 16 00:02:56.247772 systemd-logind[1878]: Removed session 5.
May 16 00:02:56.279163 systemd[1]: Started sshd@5-172.31.20.206:22-139.178.89.65:57266.service - OpenSSH per-connection server daemon (139.178.89.65:57266).
May 16 00:02:56.441702 sshd[2184]: Accepted publickey for core from 139.178.89.65 port 57266 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg
May 16 00:02:56.443329 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:02:56.448475 systemd-logind[1878]: New session 6 of user core.
May 16 00:02:56.458015 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 00:02:56.555692 sudo[2188]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 00:02:56.556016 sudo[2188]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:02:56.559533 sudo[2188]: pam_unix(sudo:session): session closed for user root
May 16 00:02:56.565030 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 00:02:56.565322 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:02:56.585326 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 00:02:56.616156 augenrules[2210]: No rules
May 16 00:02:56.617694 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 00:02:56.617946 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 00:02:56.619271 sudo[2187]: pam_unix(sudo:session): session closed for user root
May 16 00:02:56.641874 sshd[2186]: Connection closed by 139.178.89.65 port 57266
May 16 00:02:56.642420 sshd-session[2184]: pam_unix(sshd:session): session closed for user core
May 16 00:02:56.646290 systemd[1]: sshd@5-172.31.20.206:22-139.178.89.65:57266.service: Deactivated successfully.
May 16 00:02:56.647753 systemd[1]: session-6.scope: Deactivated successfully.
May 16 00:02:56.648405 systemd-logind[1878]: Session 6 logged out. Waiting for processes to exit.
May 16 00:02:56.649674 systemd-logind[1878]: Removed session 6.
May 16 00:02:56.674940 systemd[1]: Started sshd@6-172.31.20.206:22-139.178.89.65:59884.service - OpenSSH per-connection server daemon (139.178.89.65:59884).
May 16 00:02:56.838118 sshd[2218]: Accepted publickey for core from 139.178.89.65 port 59884 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg
May 16 00:02:56.839116 sshd-session[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:02:56.844181 systemd-logind[1878]: New session 7 of user core.
May 16 00:02:56.851004 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 00:02:56.948492 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 00:02:56.948844 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:02:57.319129 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 00:02:57.322070 (dockerd)[2239]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 00:02:57.675574 dockerd[2239]: time="2025-05-16T00:02:57.675413065Z" level=info msg="Starting up"
May 16 00:02:57.769700 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2111130637-merged.mount: Deactivated successfully.
May 16 00:02:57.938297 dockerd[2239]: time="2025-05-16T00:02:57.938159176Z" level=info msg="Loading containers: start."
May 16 00:02:58.117796 kernel: Initializing XFRM netlink socket
May 16 00:02:58.147417 (udev-worker)[2263]: Network interface NamePolicy= disabled on kernel command line.
May 16 00:02:58.218541 systemd-networkd[1814]: docker0: Link UP
May 16 00:02:58.243433 dockerd[2239]: time="2025-05-16T00:02:58.243384613Z" level=info msg="Loading containers: done."
May 16 00:02:58.267116 dockerd[2239]: time="2025-05-16T00:02:58.267061087Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 00:02:58.267288 dockerd[2239]: time="2025-05-16T00:02:58.267162703Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 16 00:02:58.267288 dockerd[2239]: time="2025-05-16T00:02:58.267274294Z" level=info msg="Daemon has completed initialization" May 16 00:02:58.304280 dockerd[2239]: time="2025-05-16T00:02:58.304234214Z" level=info msg="API listen on /run/docker.sock" May 16 00:02:58.304356 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 00:02:59.379200 containerd[1903]: time="2025-05-16T00:02:59.378835152Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 16 00:03:00.089618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668521536.mount: Deactivated successfully. May 16 00:03:01.818831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 00:03:01.831068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:03:02.355051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 00:03:02.357553 (kubelet)[2490]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:03:02.435908 kubelet[2490]: E0516 00:03:02.435859 2490 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:03:02.440739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:03:02.440950 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:03:03.192511 containerd[1903]: time="2025-05-16T00:03:03.192450260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:03.199006 containerd[1903]: time="2025-05-16T00:03:03.198872321Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 16 00:03:03.205790 containerd[1903]: time="2025-05-16T00:03:03.201864618Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:03.209635 containerd[1903]: time="2025-05-16T00:03:03.209586991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:03.211350 containerd[1903]: time="2025-05-16T00:03:03.211303319Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 3.832416149s" May 16 00:03:03.211550 containerd[1903]: time="2025-05-16T00:03:03.211528777Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 16 00:03:03.212970 containerd[1903]: time="2025-05-16T00:03:03.212943049Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 16 00:03:05.711401 containerd[1903]: time="2025-05-16T00:03:05.711348016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:05.712419 containerd[1903]: time="2025-05-16T00:03:05.712293001Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 16 00:03:05.715283 containerd[1903]: time="2025-05-16T00:03:05.713684563Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:05.716787 containerd[1903]: time="2025-05-16T00:03:05.716344244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:05.717671 containerd[1903]: time="2025-05-16T00:03:05.717505692Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" 
in 2.504413386s" May 16 00:03:05.717671 containerd[1903]: time="2025-05-16T00:03:05.717537720Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 16 00:03:05.718108 containerd[1903]: time="2025-05-16T00:03:05.718065463Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 16 00:03:11.610055 containerd[1903]: time="2025-05-16T00:03:11.609940056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:11.611113 containerd[1903]: time="2025-05-16T00:03:11.611060809Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 16 00:03:11.613376 containerd[1903]: time="2025-05-16T00:03:11.613316071Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:11.617100 containerd[1903]: time="2025-05-16T00:03:11.615833851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:11.617100 containerd[1903]: time="2025-05-16T00:03:11.616944334Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 5.898758729s" May 16 00:03:11.617100 containerd[1903]: time="2025-05-16T00:03:11.616971924Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference 
\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 16 00:03:11.618184 containerd[1903]: time="2025-05-16T00:03:11.617984463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 16 00:03:12.641964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 00:03:12.647963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:03:12.974062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:03:12.976305 (kubelet)[2517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:03:13.066975 kubelet[2517]: E0516 00:03:13.066921 2517 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:03:13.071263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:03:13.071461 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:03:13.220178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount683973903.mount: Deactivated successfully. 
May 16 00:03:13.820744 containerd[1903]: time="2025-05-16T00:03:13.820671695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:13.822051 containerd[1903]: time="2025-05-16T00:03:13.822003140Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 16 00:03:13.824170 containerd[1903]: time="2025-05-16T00:03:13.823146769Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:13.826246 containerd[1903]: time="2025-05-16T00:03:13.825531456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:13.826246 containerd[1903]: time="2025-05-16T00:03:13.826125933Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.208110645s" May 16 00:03:13.826246 containerd[1903]: time="2025-05-16T00:03:13.826154187Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 16 00:03:13.826660 containerd[1903]: time="2025-05-16T00:03:13.826632316Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 00:03:14.363335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941810762.mount: Deactivated successfully. 
May 16 00:03:15.508345 containerd[1903]: time="2025-05-16T00:03:15.508283045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:15.509792 containerd[1903]: time="2025-05-16T00:03:15.509576624Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 16 00:03:15.510746 containerd[1903]: time="2025-05-16T00:03:15.510684189Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:15.514685 containerd[1903]: time="2025-05-16T00:03:15.514619796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:15.516052 containerd[1903]: time="2025-05-16T00:03:15.515855751Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.689189657s" May 16 00:03:15.516052 containerd[1903]: time="2025-05-16T00:03:15.515900022Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 16 00:03:15.516966 containerd[1903]: time="2025-05-16T00:03:15.516931630Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 00:03:15.969996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645280394.mount: Deactivated successfully. 
May 16 00:03:15.977415 containerd[1903]: time="2025-05-16T00:03:15.977343701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:15.978315 containerd[1903]: time="2025-05-16T00:03:15.978269241Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 16 00:03:15.980799 containerd[1903]: time="2025-05-16T00:03:15.980112245Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:15.982521 containerd[1903]: time="2025-05-16T00:03:15.982474673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:15.983708 containerd[1903]: time="2025-05-16T00:03:15.983136751Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 466.170424ms" May 16 00:03:15.983708 containerd[1903]: time="2025-05-16T00:03:15.983166945Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 16 00:03:15.983996 containerd[1903]: time="2025-05-16T00:03:15.983973834Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 16 00:03:16.573140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1855691463.mount: Deactivated successfully. May 16 00:03:18.545092 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
May 16 00:03:19.947331 containerd[1903]: time="2025-05-16T00:03:19.947261722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:19.948501 containerd[1903]: time="2025-05-16T00:03:19.948435563Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 16 00:03:19.949881 containerd[1903]: time="2025-05-16T00:03:19.949479232Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:19.952715 containerd[1903]: time="2025-05-16T00:03:19.952525713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:03:19.953754 containerd[1903]: time="2025-05-16T00:03:19.953722061Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.969718607s" May 16 00:03:19.953889 containerd[1903]: time="2025-05-16T00:03:19.953872577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 16 00:03:22.973059 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:03:22.980166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:03:23.024891 systemd[1]: Reloading requested from client PID 2667 ('systemctl') (unit session-7.scope)... May 16 00:03:23.024910 systemd[1]: Reloading... 
May 16 00:03:23.161863 zram_generator::config[2710]: No configuration found. May 16 00:03:23.310954 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:03:23.397984 systemd[1]: Reloading finished in 372 ms. May 16 00:03:23.447407 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 00:03:23.447510 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 00:03:23.447827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:03:23.450037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:03:23.646707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:03:23.660219 (kubelet)[2770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:03:23.715158 kubelet[2770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:03:23.715692 kubelet[2770]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:03:23.715692 kubelet[2770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 00:03:23.716361 kubelet[2770]: I0516 00:03:23.716311 2770 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:03:24.004334 kubelet[2770]: I0516 00:03:24.003674 2770 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:03:24.004334 kubelet[2770]: I0516 00:03:24.003880 2770 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:03:24.004334 kubelet[2770]: I0516 00:03:24.004277 2770 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:03:24.055684 kubelet[2770]: I0516 00:03:24.054929 2770 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:03:24.059079 kubelet[2770]: E0516 00:03:24.059033 2770 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.206:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.206:6443: connect: connection refused" logger="UnhandledError" May 16 00:03:24.078158 kubelet[2770]: E0516 00:03:24.078087 2770 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:03:24.078158 kubelet[2770]: I0516 00:03:24.078146 2770 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:03:24.086851 kubelet[2770]: I0516 00:03:24.086819 2770 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:03:24.090505 kubelet[2770]: I0516 00:03:24.090440 2770 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:03:24.090738 kubelet[2770]: I0516 00:03:24.090503 2770 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-206","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:03:24.094294 kubelet[2770]: I0516 00:03:24.093504 2770 topology_manager.go:138] "Creating topology manager with none 
policy" May 16 00:03:24.094294 kubelet[2770]: I0516 00:03:24.093540 2770 container_manager_linux.go:304] "Creating device plugin manager" May 16 00:03:24.095336 kubelet[2770]: I0516 00:03:24.095245 2770 state_mem.go:36] "Initialized new in-memory state store" May 16 00:03:24.101884 kubelet[2770]: I0516 00:03:24.101494 2770 kubelet.go:446] "Attempting to sync node with API server" May 16 00:03:24.101884 kubelet[2770]: I0516 00:03:24.101566 2770 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:03:24.101884 kubelet[2770]: I0516 00:03:24.101592 2770 kubelet.go:352] "Adding apiserver pod source" May 16 00:03:24.101884 kubelet[2770]: I0516 00:03:24.101605 2770 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:03:24.112792 kubelet[2770]: W0516 00:03:24.112423 2770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-206&limit=500&resourceVersion=0": dial tcp 172.31.20.206:6443: connect: connection refused May 16 00:03:24.113132 kubelet[2770]: E0516 00:03:24.112936 2770 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-206&limit=500&resourceVersion=0\": dial tcp 172.31.20.206:6443: connect: connection refused" logger="UnhandledError" May 16 00:03:24.113691 kubelet[2770]: W0516 00:03:24.113637 2770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.206:6443: connect: connection refused May 16 00:03:24.113790 kubelet[2770]: E0516 00:03:24.113709 2770 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://172.31.20.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.206:6443: connect: connection refused" logger="UnhandledError" May 16 00:03:24.115557 kubelet[2770]: I0516 00:03:24.115500 2770 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 00:03:24.121648 kubelet[2770]: I0516 00:03:24.121205 2770 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:03:24.122620 kubelet[2770]: W0516 00:03:24.122589 2770 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 00:03:24.123354 kubelet[2770]: I0516 00:03:24.123316 2770 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:03:24.123354 kubelet[2770]: I0516 00:03:24.123359 2770 server.go:1287] "Started kubelet" May 16 00:03:24.135969 kubelet[2770]: I0516 00:03:24.135745 2770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:03:24.137048 kubelet[2770]: I0516 00:03:24.136981 2770 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:03:24.137832 kubelet[2770]: I0516 00:03:24.137781 2770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:03:24.138109 kubelet[2770]: I0516 00:03:24.138088 2770 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:03:24.147173 kubelet[2770]: I0516 00:03:24.146597 2770 server.go:479] "Adding debug handlers to kubelet server" May 16 00:03:24.149277 kubelet[2770]: I0516 00:03:24.149237 2770 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:03:24.151840 kubelet[2770]: I0516 
00:03:24.151294 2770 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:03:24.151840 kubelet[2770]: E0516 00:03:24.151573 2770 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-206\" not found" May 16 00:03:24.152724 kubelet[2770]: I0516 00:03:24.152699 2770 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:03:24.152910 kubelet[2770]: I0516 00:03:24.152898 2770 reconciler.go:26] "Reconciler: start to sync state" May 16 00:03:24.153598 kubelet[2770]: E0516 00:03:24.153563 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-206?timeout=10s\": dial tcp 172.31.20.206:6443: connect: connection refused" interval="200ms" May 16 00:03:24.159864 kubelet[2770]: E0516 00:03:24.153798 2770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.206:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.206:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-206.183fd906e7b2ee09 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-206,UID:ip-172-31-20-206,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-206,},FirstTimestamp:2025-05-16 00:03:24.123336201 +0000 UTC m=+0.459229281,LastTimestamp:2025-05-16 00:03:24.123336201 +0000 UTC m=+0.459229281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-206,}" May 16 00:03:24.162399 kubelet[2770]: I0516 00:03:24.162118 2770 factory.go:221] Registration of the systemd container factory successfully May 16 00:03:24.163917 kubelet[2770]: I0516 00:03:24.162932 2770 factory.go:219] 
Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:03:24.163917 kubelet[2770]: W0516 00:03:24.163575 2770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.206:6443: connect: connection refused May 16 00:03:24.163917 kubelet[2770]: E0516 00:03:24.163672 2770 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.206:6443: connect: connection refused" logger="UnhandledError" May 16 00:03:24.174708 kubelet[2770]: I0516 00:03:24.174674 2770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:03:24.177621 kubelet[2770]: I0516 00:03:24.177580 2770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:03:24.177621 kubelet[2770]: I0516 00:03:24.177620 2770 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 00:03:24.178080 kubelet[2770]: I0516 00:03:24.177648 2770 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 00:03:24.178080 kubelet[2770]: I0516 00:03:24.177657 2770 kubelet.go:2382] "Starting kubelet main sync loop" May 16 00:03:24.178080 kubelet[2770]: E0516 00:03:24.177712 2770 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:03:24.182890 kubelet[2770]: W0516 00:03:24.182706 2770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.206:6443: connect: connection refused May 16 00:03:24.183027 kubelet[2770]: E0516 00:03:24.182948 2770 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.206:6443: connect: connection refused" logger="UnhandledError" May 16 00:03:24.183813 kubelet[2770]: I0516 00:03:24.183713 2770 factory.go:221] Registration of the containerd container factory successfully May 16 00:03:24.184879 kubelet[2770]: E0516 00:03:24.184442 2770 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:03:24.199423 kubelet[2770]: I0516 00:03:24.199400 2770 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:03:24.199576 kubelet[2770]: I0516 00:03:24.199562 2770 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:03:24.199689 kubelet[2770]: I0516 00:03:24.199678 2770 state_mem.go:36] "Initialized new in-memory state store" May 16 00:03:24.203955 kubelet[2770]: I0516 00:03:24.203920 2770 policy_none.go:49] "None policy: Start" May 16 00:03:24.203955 kubelet[2770]: I0516 00:03:24.203950 2770 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:03:24.203955 kubelet[2770]: I0516 00:03:24.203963 2770 state_mem.go:35] "Initializing new in-memory state store" May 16 00:03:24.211885 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 00:03:24.229035 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 00:03:24.233297 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 00:03:24.240718 kubelet[2770]: I0516 00:03:24.240120 2770 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:03:24.240718 kubelet[2770]: I0516 00:03:24.240289 2770 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:03:24.240718 kubelet[2770]: I0516 00:03:24.240301 2770 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:03:24.240718 kubelet[2770]: I0516 00:03:24.240505 2770 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:03:24.243694 kubelet[2770]: E0516 00:03:24.243662 2770 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 16 00:03:24.243845 kubelet[2770]: E0516 00:03:24.243829 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-206\" not found" May 16 00:03:24.287372 systemd[1]: Created slice kubepods-burstable-pod94a7c5b5701fba5cb7e4be06e344d48c.slice - libcontainer container kubepods-burstable-pod94a7c5b5701fba5cb7e4be06e344d48c.slice. May 16 00:03:24.300998 kubelet[2770]: E0516 00:03:24.300948 2770 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-206\" not found" node="ip-172-31-20-206" May 16 00:03:24.305243 systemd[1]: Created slice kubepods-burstable-podf8e4413d1aaa8dea3e1b39b6b545c04c.slice - libcontainer container kubepods-burstable-podf8e4413d1aaa8dea3e1b39b6b545c04c.slice. May 16 00:03:24.309429 kubelet[2770]: E0516 00:03:24.308639 2770 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-206\" not found" node="ip-172-31-20-206" May 16 00:03:24.310603 systemd[1]: Created slice kubepods-burstable-poda6b7e65e85fc515b249a0bad3999125f.slice - libcontainer container kubepods-burstable-poda6b7e65e85fc515b249a0bad3999125f.slice. 
May 16 00:03:24.313030 kubelet[2770]: E0516 00:03:24.312992 2770 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-206\" not found" node="ip-172-31-20-206" May 16 00:03:24.342895 kubelet[2770]: I0516 00:03:24.342850 2770 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-206" May 16 00:03:24.343301 kubelet[2770]: E0516 00:03:24.343262 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.206:6443/api/v1/nodes\": dial tcp 172.31.20.206:6443: connect: connection refused" node="ip-172-31-20-206" May 16 00:03:24.354171 kubelet[2770]: I0516 00:03:24.353957 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8e4413d1aaa8dea3e1b39b6b545c04c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-206\" (UID: \"f8e4413d1aaa8dea3e1b39b6b545c04c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 16 00:03:24.354171 kubelet[2770]: I0516 00:03:24.354006 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8e4413d1aaa8dea3e1b39b6b545c04c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-206\" (UID: \"f8e4413d1aaa8dea3e1b39b6b545c04c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 16 00:03:24.354171 kubelet[2770]: I0516 00:03:24.354026 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8e4413d1aaa8dea3e1b39b6b545c04c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-206\" (UID: \"f8e4413d1aaa8dea3e1b39b6b545c04c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 16 00:03:24.354171 kubelet[2770]: I0516 00:03:24.354042 2770 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6b7e65e85fc515b249a0bad3999125f-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-206\" (UID: \"a6b7e65e85fc515b249a0bad3999125f\") " pod="kube-system/kube-scheduler-ip-172-31-20-206" May 16 00:03:24.354171 kubelet[2770]: I0516 00:03:24.354057 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94a7c5b5701fba5cb7e4be06e344d48c-ca-certs\") pod \"kube-apiserver-ip-172-31-20-206\" (UID: \"94a7c5b5701fba5cb7e4be06e344d48c\") " pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 00:03:24.354449 kubelet[2770]: I0516 00:03:24.354072 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94a7c5b5701fba5cb7e4be06e344d48c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-206\" (UID: \"94a7c5b5701fba5cb7e4be06e344d48c\") " pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 00:03:24.354449 kubelet[2770]: I0516 00:03:24.354089 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8e4413d1aaa8dea3e1b39b6b545c04c-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-206\" (UID: \"f8e4413d1aaa8dea3e1b39b6b545c04c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 16 00:03:24.354449 kubelet[2770]: I0516 00:03:24.354106 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/94a7c5b5701fba5cb7e4be06e344d48c-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-206\" (UID: \"94a7c5b5701fba5cb7e4be06e344d48c\") " pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 00:03:24.354449 kubelet[2770]: 
I0516 00:03:24.354120 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8e4413d1aaa8dea3e1b39b6b545c04c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-206\" (UID: \"f8e4413d1aaa8dea3e1b39b6b545c04c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 16 00:03:24.354449 kubelet[2770]: E0516 00:03:24.354244 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-206?timeout=10s\": dial tcp 172.31.20.206:6443: connect: connection refused" interval="400ms" May 16 00:03:24.501019 kubelet[2770]: E0516 00:03:24.500897 2770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.206:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.206:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-206.183fd906e7b2ee09 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-206,UID:ip-172-31-20-206,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-206,},FirstTimestamp:2025-05-16 00:03:24.123336201 +0000 UTC m=+0.459229281,LastTimestamp:2025-05-16 00:03:24.123336201 +0000 UTC m=+0.459229281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-206,}" May 16 00:03:24.546228 kubelet[2770]: I0516 00:03:24.546107 2770 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-206" May 16 00:03:24.546964 kubelet[2770]: E0516 00:03:24.546929 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.206:6443/api/v1/nodes\": dial tcp 172.31.20.206:6443: 
connect: connection refused" node="ip-172-31-20-206" May 16 00:03:24.602521 containerd[1903]: time="2025-05-16T00:03:24.602459887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-206,Uid:94a7c5b5701fba5cb7e4be06e344d48c,Namespace:kube-system,Attempt:0,}" May 16 00:03:24.614775 containerd[1903]: time="2025-05-16T00:03:24.614714530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-206,Uid:f8e4413d1aaa8dea3e1b39b6b545c04c,Namespace:kube-system,Attempt:0,}" May 16 00:03:24.615564 containerd[1903]: time="2025-05-16T00:03:24.615527850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-206,Uid:a6b7e65e85fc515b249a0bad3999125f,Namespace:kube-system,Attempt:0,}" May 16 00:03:24.755578 kubelet[2770]: E0516 00:03:24.755537 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-206?timeout=10s\": dial tcp 172.31.20.206:6443: connect: connection refused" interval="800ms" May 16 00:03:24.949489 kubelet[2770]: I0516 00:03:24.949010 2770 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-206" May 16 00:03:24.949489 kubelet[2770]: E0516 00:03:24.949356 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.206:6443/api/v1/nodes\": dial tcp 172.31.20.206:6443: connect: connection refused" node="ip-172-31-20-206" May 16 00:03:25.065948 kubelet[2770]: W0516 00:03:25.065883 2770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.206:6443: connect: connection refused May 16 00:03:25.065948 kubelet[2770]: E0516 00:03:25.065952 2770 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.206:6443: connect: connection refused" logger="UnhandledError" May 16 00:03:25.095359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount275137230.mount: Deactivated successfully. May 16 00:03:25.103335 containerd[1903]: time="2025-05-16T00:03:25.103277565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:03:25.105585 containerd[1903]: time="2025-05-16T00:03:25.105539545Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:03:25.107196 containerd[1903]: time="2025-05-16T00:03:25.107108907Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 16 00:03:25.108262 containerd[1903]: time="2025-05-16T00:03:25.108211049Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 16 00:03:25.109758 containerd[1903]: time="2025-05-16T00:03:25.109711994Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:03:25.112008 containerd[1903]: time="2025-05-16T00:03:25.111953530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:03:25.112440 containerd[1903]: time="2025-05-16T00:03:25.112350334Z" level=info msg="stop 
pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 16 00:03:25.115436 containerd[1903]: time="2025-05-16T00:03:25.115396156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:03:25.117374 containerd[1903]: time="2025-05-16T00:03:25.117334918Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.49463ms" May 16 00:03:25.118514 containerd[1903]: time="2025-05-16T00:03:25.118470191Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 515.913991ms" May 16 00:03:25.122174 containerd[1903]: time="2025-05-16T00:03:25.122130874Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 506.509794ms" May 16 00:03:25.161514 kubelet[2770]: W0516 00:03:25.161428 2770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.206:6443: connect: connection refused May 16 00:03:25.161661 kubelet[2770]: 
E0516 00:03:25.161524 2770 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.206:6443: connect: connection refused" logger="UnhandledError" May 16 00:03:25.332392 kubelet[2770]: W0516 00:03:25.332206 2770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.206:6443: connect: connection refused May 16 00:03:25.332392 kubelet[2770]: E0516 00:03:25.332272 2770 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.206:6443: connect: connection refused" logger="UnhandledError" May 16 00:03:25.341172 containerd[1903]: time="2025-05-16T00:03:25.341095825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:03:25.342538 containerd[1903]: time="2025-05-16T00:03:25.342496668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:03:25.345273 containerd[1903]: time="2025-05-16T00:03:25.344435135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:03:25.345273 containerd[1903]: time="2025-05-16T00:03:25.344652896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:03:25.345993 containerd[1903]: time="2025-05-16T00:03:25.339348985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:03:25.345993 containerd[1903]: time="2025-05-16T00:03:25.345795372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:03:25.345993 containerd[1903]: time="2025-05-16T00:03:25.345816272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:03:25.345993 containerd[1903]: time="2025-05-16T00:03:25.345912119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:03:25.356379 containerd[1903]: time="2025-05-16T00:03:25.356079093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:03:25.356379 containerd[1903]: time="2025-05-16T00:03:25.356153017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:03:25.356379 containerd[1903]: time="2025-05-16T00:03:25.356176569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:03:25.356379 containerd[1903]: time="2025-05-16T00:03:25.356274429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:03:25.386993 systemd[1]: Started cri-containerd-b1424b662b495d2d26a80edb43dd8b0efbd8f382863fd9641da5a6a1e2503422.scope - libcontainer container b1424b662b495d2d26a80edb43dd8b0efbd8f382863fd9641da5a6a1e2503422. 
May 16 00:03:25.399207 systemd[1]: Started cri-containerd-18c45c6b1aa6836943ad45255d1fa801966243af3fb30fb358c9c490532bf7cf.scope - libcontainer container 18c45c6b1aa6836943ad45255d1fa801966243af3fb30fb358c9c490532bf7cf. May 16 00:03:25.408000 systemd[1]: Started cri-containerd-a1baedf124beadbed645c554eab7166e45cb7a20a83475c0adb41ffd983f5ef7.scope - libcontainer container a1baedf124beadbed645c554eab7166e45cb7a20a83475c0adb41ffd983f5ef7. May 16 00:03:25.504987 containerd[1903]: time="2025-05-16T00:03:25.504823077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-206,Uid:94a7c5b5701fba5cb7e4be06e344d48c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1424b662b495d2d26a80edb43dd8b0efbd8f382863fd9641da5a6a1e2503422\"" May 16 00:03:25.510982 containerd[1903]: time="2025-05-16T00:03:25.510940331Z" level=info msg="CreateContainer within sandbox \"b1424b662b495d2d26a80edb43dd8b0efbd8f382863fd9641da5a6a1e2503422\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 00:03:25.517006 containerd[1903]: time="2025-05-16T00:03:25.516966699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-206,Uid:a6b7e65e85fc515b249a0bad3999125f,Namespace:kube-system,Attempt:0,} returns sandbox id \"18c45c6b1aa6836943ad45255d1fa801966243af3fb30fb358c9c490532bf7cf\"" May 16 00:03:25.520220 containerd[1903]: time="2025-05-16T00:03:25.520180275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-206,Uid:f8e4413d1aaa8dea3e1b39b6b545c04c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1baedf124beadbed645c554eab7166e45cb7a20a83475c0adb41ffd983f5ef7\"" May 16 00:03:25.524613 containerd[1903]: time="2025-05-16T00:03:25.524191465Z" level=info msg="CreateContainer within sandbox \"18c45c6b1aa6836943ad45255d1fa801966243af3fb30fb358c9c490532bf7cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 00:03:25.526784 
containerd[1903]: time="2025-05-16T00:03:25.526737750Z" level=info msg="CreateContainer within sandbox \"a1baedf124beadbed645c554eab7166e45cb7a20a83475c0adb41ffd983f5ef7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 00:03:25.557233 kubelet[2770]: E0516 00:03:25.557190 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-206?timeout=10s\": dial tcp 172.31.20.206:6443: connect: connection refused" interval="1.6s" May 16 00:03:25.572754 containerd[1903]: time="2025-05-16T00:03:25.572675750Z" level=info msg="CreateContainer within sandbox \"18c45c6b1aa6836943ad45255d1fa801966243af3fb30fb358c9c490532bf7cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e2315146917f35ef24eca4d9464cde423df749967837bbaa9df893d6eb1585c9\"" May 16 00:03:25.576058 containerd[1903]: time="2025-05-16T00:03:25.575322145Z" level=info msg="CreateContainer within sandbox \"b1424b662b495d2d26a80edb43dd8b0efbd8f382863fd9641da5a6a1e2503422\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"289b21157f0ab5bb034cde7131516888949b59df285ad27ce2ce8d4245659f44\"" May 16 00:03:25.576058 containerd[1903]: time="2025-05-16T00:03:25.575585518Z" level=info msg="StartContainer for \"e2315146917f35ef24eca4d9464cde423df749967837bbaa9df893d6eb1585c9\"" May 16 00:03:25.578967 containerd[1903]: time="2025-05-16T00:03:25.578933750Z" level=info msg="CreateContainer within sandbox \"a1baedf124beadbed645c554eab7166e45cb7a20a83475c0adb41ffd983f5ef7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"498eeb94a677186451556a7cb4e2bd9e60b5cd8503db9b19237b590ba4fd053d\"" May 16 00:03:25.579246 containerd[1903]: time="2025-05-16T00:03:25.579228988Z" level=info msg="StartContainer for \"289b21157f0ab5bb034cde7131516888949b59df285ad27ce2ce8d4245659f44\"" May 16 
00:03:25.588950 containerd[1903]: time="2025-05-16T00:03:25.588841143Z" level=info msg="StartContainer for \"498eeb94a677186451556a7cb4e2bd9e60b5cd8503db9b19237b590ba4fd053d\"" May 16 00:03:25.611086 systemd[1]: Started cri-containerd-289b21157f0ab5bb034cde7131516888949b59df285ad27ce2ce8d4245659f44.scope - libcontainer container 289b21157f0ab5bb034cde7131516888949b59df285ad27ce2ce8d4245659f44. May 16 00:03:25.618988 systemd[1]: Started cri-containerd-e2315146917f35ef24eca4d9464cde423df749967837bbaa9df893d6eb1585c9.scope - libcontainer container e2315146917f35ef24eca4d9464cde423df749967837bbaa9df893d6eb1585c9. May 16 00:03:25.651446 kubelet[2770]: W0516 00:03:25.651300 2770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-206&limit=500&resourceVersion=0": dial tcp 172.31.20.206:6443: connect: connection refused May 16 00:03:25.651978 kubelet[2770]: E0516 00:03:25.651842 2770 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-206&limit=500&resourceVersion=0\": dial tcp 172.31.20.206:6443: connect: connection refused" logger="UnhandledError" May 16 00:03:25.653022 systemd[1]: Started cri-containerd-498eeb94a677186451556a7cb4e2bd9e60b5cd8503db9b19237b590ba4fd053d.scope - libcontainer container 498eeb94a677186451556a7cb4e2bd9e60b5cd8503db9b19237b590ba4fd053d. 
May 16 00:03:25.727752 containerd[1903]: time="2025-05-16T00:03:25.726930073Z" level=info msg="StartContainer for \"289b21157f0ab5bb034cde7131516888949b59df285ad27ce2ce8d4245659f44\" returns successfully" May 16 00:03:25.729588 containerd[1903]: time="2025-05-16T00:03:25.729536552Z" level=info msg="StartContainer for \"e2315146917f35ef24eca4d9464cde423df749967837bbaa9df893d6eb1585c9\" returns successfully" May 16 00:03:25.752281 kubelet[2770]: I0516 00:03:25.752249 2770 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-206" May 16 00:03:25.753477 kubelet[2770]: E0516 00:03:25.753432 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.206:6443/api/v1/nodes\": dial tcp 172.31.20.206:6443: connect: connection refused" node="ip-172-31-20-206" May 16 00:03:25.771161 containerd[1903]: time="2025-05-16T00:03:25.771036257Z" level=info msg="StartContainer for \"498eeb94a677186451556a7cb4e2bd9e60b5cd8503db9b19237b590ba4fd053d\" returns successfully" May 16 00:03:26.134389 kubelet[2770]: E0516 00:03:26.134345 2770 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.206:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.206:6443: connect: connection refused" logger="UnhandledError" May 16 00:03:26.217476 kubelet[2770]: E0516 00:03:26.217445 2770 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-206\" not found" node="ip-172-31-20-206" May 16 00:03:26.219696 kubelet[2770]: E0516 00:03:26.219662 2770 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-206\" not found" node="ip-172-31-20-206" May 16 00:03:26.222098 kubelet[2770]: E0516 00:03:26.222067 
2770 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-206\" not found" node="ip-172-31-20-206" May 16 00:03:27.225525 kubelet[2770]: E0516 00:03:27.225489 2770 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-206\" not found" node="ip-172-31-20-206" May 16 00:03:27.226105 kubelet[2770]: E0516 00:03:27.226083 2770 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-206\" not found" node="ip-172-31-20-206" May 16 00:03:27.226481 kubelet[2770]: E0516 00:03:27.226461 2770 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-206\" not found" node="ip-172-31-20-206" May 16 00:03:27.356918 kubelet[2770]: I0516 00:03:27.356886 2770 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-206" May 16 00:03:28.784724 kubelet[2770]: E0516 00:03:28.784680 2770 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-206\" not found" node="ip-172-31-20-206" May 16 00:03:28.814684 kubelet[2770]: E0516 00:03:28.814609 2770 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-206\" not found" node="ip-172-31-20-206" May 16 00:03:28.890177 kubelet[2770]: I0516 00:03:28.890135 2770 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-206" May 16 00:03:28.953059 kubelet[2770]: I0516 00:03:28.952784 2770 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-206" May 16 00:03:28.958436 kubelet[2770]: E0516 00:03:28.958397 2770 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-206\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ip-172-31-20-206" May 16 00:03:28.958436 kubelet[2770]: I0516 00:03:28.958429 2770 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 00:03:28.961134 kubelet[2770]: E0516 00:03:28.960879 2770 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-206\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 00:03:28.961134 kubelet[2770]: I0516 00:03:28.960916 2770 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 16 00:03:28.964726 kubelet[2770]: E0516 00:03:28.964648 2770 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-206\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 16 00:03:29.117055 kubelet[2770]: I0516 00:03:29.116873 2770 apiserver.go:52] "Watching apiserver" May 16 00:03:29.153240 kubelet[2770]: I0516 00:03:29.153190 2770 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:03:30.054535 kubelet[2770]: I0516 00:03:30.054503 2770 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 00:03:30.630293 systemd[1]: Reloading requested from client PID 3041 ('systemctl') (unit session-7.scope)... May 16 00:03:30.630311 systemd[1]: Reloading... May 16 00:03:30.735795 zram_generator::config[3084]: No configuration found. May 16 00:03:30.877554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:03:30.980908 systemd[1]: Reloading finished in 350 ms. 
May 16 00:03:31.026914 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:03:31.052214 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:03:31.052449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:03:31.058315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:03:31.295915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:03:31.309202 (kubelet)[3141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:03:31.362485 kubelet[3141]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:03:31.362888 kubelet[3141]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:03:31.362932 kubelet[3141]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 00:03:31.363110 kubelet[3141]: I0516 00:03:31.363086 3141 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:03:31.369486 kubelet[3141]: I0516 00:03:31.369454 3141 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:03:31.371784 kubelet[3141]: I0516 00:03:31.369616 3141 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:03:31.371784 kubelet[3141]: I0516 00:03:31.369896 3141 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:03:31.372720 kubelet[3141]: I0516 00:03:31.372684 3141 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 00:03:31.375627 kubelet[3141]: I0516 00:03:31.375588 3141 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:03:31.379471 kubelet[3141]: E0516 00:03:31.379418 3141 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:03:31.379471 kubelet[3141]: I0516 00:03:31.379469 3141 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:03:31.381940 kubelet[3141]: I0516 00:03:31.381920 3141 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:03:31.383242 kubelet[3141]: I0516 00:03:31.383178 3141 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:03:31.383512 kubelet[3141]: I0516 00:03:31.383339 3141 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-206","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:03:31.383700 kubelet[3141]: I0516 00:03:31.383622 3141 topology_manager.go:138] "Creating topology manager with none 
policy" May 16 00:03:31.383700 kubelet[3141]: I0516 00:03:31.383637 3141 container_manager_linux.go:304] "Creating device plugin manager" May 16 00:03:31.388790 kubelet[3141]: I0516 00:03:31.388009 3141 state_mem.go:36] "Initialized new in-memory state store" May 16 00:03:31.388790 kubelet[3141]: I0516 00:03:31.388214 3141 kubelet.go:446] "Attempting to sync node with API server" May 16 00:03:31.388790 kubelet[3141]: I0516 00:03:31.388235 3141 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:03:31.388790 kubelet[3141]: I0516 00:03:31.388273 3141 kubelet.go:352] "Adding apiserver pod source" May 16 00:03:31.388790 kubelet[3141]: I0516 00:03:31.388282 3141 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:03:31.392924 kubelet[3141]: I0516 00:03:31.392897 3141 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 00:03:31.395291 kubelet[3141]: I0516 00:03:31.395260 3141 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:03:31.397347 kubelet[3141]: I0516 00:03:31.395877 3141 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:03:31.397347 kubelet[3141]: I0516 00:03:31.395910 3141 server.go:1287] "Started kubelet" May 16 00:03:31.397347 kubelet[3141]: I0516 00:03:31.396121 3141 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:03:31.397347 kubelet[3141]: I0516 00:03:31.397000 3141 server.go:479] "Adding debug handlers to kubelet server" May 16 00:03:31.406146 kubelet[3141]: I0516 00:03:31.406077 3141 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:03:31.406471 kubelet[3141]: I0516 00:03:31.406454 3141 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:03:31.419100 kubelet[3141]: I0516 00:03:31.418578 
3141 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:03:31.425987 kubelet[3141]: I0516 00:03:31.425962 3141 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:03:31.429368 kubelet[3141]: I0516 00:03:31.429341 3141 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:03:31.429467 kubelet[3141]: I0516 00:03:31.429435 3141 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:03:31.429545 kubelet[3141]: I0516 00:03:31.429532 3141 reconciler.go:26] "Reconciler: start to sync state" May 16 00:03:31.431656 kubelet[3141]: I0516 00:03:31.431630 3141 factory.go:221] Registration of the systemd container factory successfully May 16 00:03:31.431796 kubelet[3141]: I0516 00:03:31.431710 3141 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:03:31.433231 kubelet[3141]: I0516 00:03:31.433209 3141 factory.go:221] Registration of the containerd container factory successfully May 16 00:03:31.437400 kubelet[3141]: I0516 00:03:31.437365 3141 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:03:31.439146 kubelet[3141]: I0516 00:03:31.439122 3141 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:03:31.439306 kubelet[3141]: I0516 00:03:31.439296 3141 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 00:03:31.439387 kubelet[3141]: I0516 00:03:31.439378 3141 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 00:03:31.439430 kubelet[3141]: I0516 00:03:31.439425 3141 kubelet.go:2382] "Starting kubelet main sync loop" May 16 00:03:31.439521 kubelet[3141]: E0516 00:03:31.439505 3141 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:03:31.496278 kubelet[3141]: I0516 00:03:31.496256 3141 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:03:31.496455 kubelet[3141]: I0516 00:03:31.496443 3141 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:03:31.496598 kubelet[3141]: I0516 00:03:31.496513 3141 state_mem.go:36] "Initialized new in-memory state store" May 16 00:03:31.497053 kubelet[3141]: I0516 00:03:31.496840 3141 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 00:03:31.497053 kubelet[3141]: I0516 00:03:31.496854 3141 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 00:03:31.497053 kubelet[3141]: I0516 00:03:31.496871 3141 policy_none.go:49] "None policy: Start" May 16 00:03:31.497053 kubelet[3141]: I0516 00:03:31.496881 3141 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:03:31.497053 kubelet[3141]: I0516 00:03:31.496891 3141 state_mem.go:35] "Initializing new in-memory state store" May 16 00:03:31.497053 kubelet[3141]: I0516 00:03:31.496990 3141 state_mem.go:75] "Updated machine memory state" May 16 00:03:31.501427 kubelet[3141]: I0516 00:03:31.501408 3141 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:03:31.503433 kubelet[3141]: I0516 00:03:31.502078 3141 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:03:31.503433 kubelet[3141]: I0516 00:03:31.502092 3141 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:03:31.505788 kubelet[3141]: I0516 00:03:31.505295 3141 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:03:31.508269 kubelet[3141]: E0516 00:03:31.508249 3141 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 00:03:31.542245 kubelet[3141]: I0516 00:03:31.542208 3141 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-206" May 16 00:03:31.542406 kubelet[3141]: I0516 00:03:31.542388 3141 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 16 00:03:31.542726 kubelet[3141]: I0516 00:03:31.542645 3141 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 00:03:31.550777 kubelet[3141]: E0516 00:03:31.550668 3141 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-206\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 00:03:31.609745 kubelet[3141]: I0516 00:03:31.609713 3141 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-206" May 16 00:03:31.621294 kubelet[3141]: I0516 00:03:31.621259 3141 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-206" May 16 00:03:31.621426 kubelet[3141]: I0516 00:03:31.621343 3141 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-206" May 16 00:03:31.631655 kubelet[3141]: I0516 00:03:31.630925 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8e4413d1aaa8dea3e1b39b6b545c04c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-206\" (UID: \"f8e4413d1aaa8dea3e1b39b6b545c04c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 16 00:03:31.631655 kubelet[3141]: I0516 00:03:31.630957 3141 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8e4413d1aaa8dea3e1b39b6b545c04c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-206\" (UID: \"f8e4413d1aaa8dea3e1b39b6b545c04c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 16 00:03:31.631655 kubelet[3141]: I0516 00:03:31.630975 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94a7c5b5701fba5cb7e4be06e344d48c-ca-certs\") pod \"kube-apiserver-ip-172-31-20-206\" (UID: \"94a7c5b5701fba5cb7e4be06e344d48c\") " pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 00:03:31.631655 kubelet[3141]: I0516 00:03:31.630991 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8e4413d1aaa8dea3e1b39b6b545c04c-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-206\" (UID: \"f8e4413d1aaa8dea3e1b39b6b545c04c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 16 00:03:31.631655 kubelet[3141]: I0516 00:03:31.631008 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8e4413d1aaa8dea3e1b39b6b545c04c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-206\" (UID: \"f8e4413d1aaa8dea3e1b39b6b545c04c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 16 00:03:31.631902 kubelet[3141]: I0516 00:03:31.631027 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8e4413d1aaa8dea3e1b39b6b545c04c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-206\" (UID: \"f8e4413d1aaa8dea3e1b39b6b545c04c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-206" May 
16 00:03:31.631902 kubelet[3141]: I0516 00:03:31.631043 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6b7e65e85fc515b249a0bad3999125f-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-206\" (UID: \"a6b7e65e85fc515b249a0bad3999125f\") " pod="kube-system/kube-scheduler-ip-172-31-20-206" May 16 00:03:31.631902 kubelet[3141]: I0516 00:03:31.631057 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/94a7c5b5701fba5cb7e4be06e344d48c-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-206\" (UID: \"94a7c5b5701fba5cb7e4be06e344d48c\") " pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 00:03:31.631902 kubelet[3141]: I0516 00:03:31.631073 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94a7c5b5701fba5cb7e4be06e344d48c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-206\" (UID: \"94a7c5b5701fba5cb7e4be06e344d48c\") " pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 00:03:31.653089 sudo[3175]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 16 00:03:31.653416 sudo[3175]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 16 00:03:32.346117 sudo[3175]: pam_unix(sudo:session): session closed for user root May 16 00:03:32.389709 kubelet[3141]: I0516 00:03:32.389220 3141 apiserver.go:52] "Watching apiserver" May 16 00:03:32.430334 kubelet[3141]: I0516 00:03:32.430256 3141 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:03:32.476055 kubelet[3141]: I0516 00:03:32.475894 3141 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 
00:03:32.494629 kubelet[3141]: E0516 00:03:32.494380 3141 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-206\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-206" May 16 00:03:32.526327 kubelet[3141]: I0516 00:03:32.525663 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-206" podStartSLOduration=1.525641461 podStartE2EDuration="1.525641461s" podCreationTimestamp="2025-05-16 00:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:03:32.524861129 +0000 UTC m=+1.209174342" watchObservedRunningTime="2025-05-16 00:03:32.525641461 +0000 UTC m=+1.209954673" May 16 00:03:32.526815 kubelet[3141]: I0516 00:03:32.526645 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-206" podStartSLOduration=1.526631181 podStartE2EDuration="1.526631181s" podCreationTimestamp="2025-05-16 00:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:03:32.511808384 +0000 UTC m=+1.196121596" watchObservedRunningTime="2025-05-16 00:03:32.526631181 +0000 UTC m=+1.210944394" May 16 00:03:32.556828 kubelet[3141]: I0516 00:03:32.556471 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-206" podStartSLOduration=2.556447215 podStartE2EDuration="2.556447215s" podCreationTimestamp="2025-05-16 00:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:03:32.542913974 +0000 UTC m=+1.227227186" watchObservedRunningTime="2025-05-16 00:03:32.556447215 +0000 UTC m=+1.240760420" May 16 00:03:33.144954 update_engine[1881]: I20250516 00:03:33.144866 1881 
update_attempter.cc:509] Updating boot flags... May 16 00:03:33.207816 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3199) May 16 00:03:33.413931 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3189) May 16 00:03:34.518753 sudo[2221]: pam_unix(sudo:session): session closed for user root May 16 00:03:34.540817 sshd[2220]: Connection closed by 139.178.89.65 port 59884 May 16 00:03:34.541980 sshd-session[2218]: pam_unix(sshd:session): session closed for user core May 16 00:03:34.545819 systemd[1]: sshd@6-172.31.20.206:22-139.178.89.65:59884.service: Deactivated successfully. May 16 00:03:34.549264 systemd[1]: session-7.scope: Deactivated successfully. May 16 00:03:34.549468 systemd[1]: session-7.scope: Consumed 5.281s CPU time, 135.8M memory peak, 0B memory swap peak. May 16 00:03:34.551399 systemd-logind[1878]: Session 7 logged out. Waiting for processes to exit. May 16 00:03:34.552951 systemd-logind[1878]: Removed session 7. May 16 00:03:36.727264 kubelet[3141]: I0516 00:03:36.727217 3141 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 00:03:36.728793 kubelet[3141]: I0516 00:03:36.728358 3141 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 00:03:36.728866 containerd[1903]: time="2025-05-16T00:03:36.728121518Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 00:03:37.500590 systemd[1]: Created slice kubepods-besteffort-pod72701fec_6289_4b21_aa5e_ac84cb326d13.slice - libcontainer container kubepods-besteffort-pod72701fec_6289_4b21_aa5e_ac84cb326d13.slice. May 16 00:03:37.524853 systemd[1]: Created slice kubepods-burstable-poda85a73a3_d462_43e9_be1d_814c23557f89.slice - libcontainer container kubepods-burstable-poda85a73a3_d462_43e9_be1d_814c23557f89.slice. 
May 16 00:03:37.574159 kubelet[3141]: I0516 00:03:37.574095 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-host-proc-sys-net\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574323 kubelet[3141]: I0516 00:03:37.574168 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a85a73a3-d462-43e9-be1d-814c23557f89-clustermesh-secrets\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574323 kubelet[3141]: I0516 00:03:37.574194 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-cni-path\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574323 kubelet[3141]: I0516 00:03:37.574216 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-host-proc-sys-kernel\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574323 kubelet[3141]: I0516 00:03:37.574239 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a85a73a3-d462-43e9-be1d-814c23557f89-hubble-tls\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574323 kubelet[3141]: I0516 00:03:37.574261 3141 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/72701fec-6289-4b21-aa5e-ac84cb326d13-kube-proxy\") pod \"kube-proxy-dm6p2\" (UID: \"72701fec-6289-4b21-aa5e-ac84cb326d13\") " pod="kube-system/kube-proxy-dm6p2" May 16 00:03:37.574323 kubelet[3141]: I0516 00:03:37.574281 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-etc-cni-netd\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574596 kubelet[3141]: I0516 00:03:37.574304 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-cilium-run\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574596 kubelet[3141]: I0516 00:03:37.574324 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72701fec-6289-4b21-aa5e-ac84cb326d13-lib-modules\") pod \"kube-proxy-dm6p2\" (UID: \"72701fec-6289-4b21-aa5e-ac84cb326d13\") " pod="kube-system/kube-proxy-dm6p2" May 16 00:03:37.574596 kubelet[3141]: I0516 00:03:37.574347 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-hostproc\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574596 kubelet[3141]: I0516 00:03:37.574369 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/a85a73a3-d462-43e9-be1d-814c23557f89-cilium-config-path\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574596 kubelet[3141]: I0516 00:03:37.574397 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72701fec-6289-4b21-aa5e-ac84cb326d13-xtables-lock\") pod \"kube-proxy-dm6p2\" (UID: \"72701fec-6289-4b21-aa5e-ac84cb326d13\") " pod="kube-system/kube-proxy-dm6p2" May 16 00:03:37.574596 kubelet[3141]: I0516 00:03:37.574451 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-bpf-maps\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574854 kubelet[3141]: I0516 00:03:37.574474 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-lib-modules\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574854 kubelet[3141]: I0516 00:03:37.574500 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7d8b\" (UniqueName: \"kubernetes.io/projected/72701fec-6289-4b21-aa5e-ac84cb326d13-kube-api-access-p7d8b\") pod \"kube-proxy-dm6p2\" (UID: \"72701fec-6289-4b21-aa5e-ac84cb326d13\") " pod="kube-system/kube-proxy-dm6p2" May 16 00:03:37.574854 kubelet[3141]: I0516 00:03:37.574530 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-xtables-lock\") pod \"cilium-cx4l7\" (UID: 
\"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574854 kubelet[3141]: I0516 00:03:37.574554 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-cilium-cgroup\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.574854 kubelet[3141]: I0516 00:03:37.574581 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngn7f\" (UniqueName: \"kubernetes.io/projected/a85a73a3-d462-43e9-be1d-814c23557f89-kube-api-access-ngn7f\") pod \"cilium-cx4l7\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " pod="kube-system/cilium-cx4l7" May 16 00:03:37.757685 systemd[1]: Created slice kubepods-besteffort-pod5e5b6712_56de_427d_a641_012139968840.slice - libcontainer container kubepods-besteffort-pod5e5b6712_56de_427d_a641_012139968840.slice. 
May 16 00:03:37.776072 kubelet[3141]: I0516 00:03:37.776011 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k4zz\" (UniqueName: \"kubernetes.io/projected/5e5b6712-56de-427d-a641-012139968840-kube-api-access-4k4zz\") pod \"cilium-operator-6c4d7847fc-9b24c\" (UID: \"5e5b6712-56de-427d-a641-012139968840\") " pod="kube-system/cilium-operator-6c4d7847fc-9b24c" May 16 00:03:37.776548 kubelet[3141]: I0516 00:03:37.776085 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e5b6712-56de-427d-a641-012139968840-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9b24c\" (UID: \"5e5b6712-56de-427d-a641-012139968840\") " pod="kube-system/cilium-operator-6c4d7847fc-9b24c" May 16 00:03:37.817743 containerd[1903]: time="2025-05-16T00:03:37.817692722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dm6p2,Uid:72701fec-6289-4b21-aa5e-ac84cb326d13,Namespace:kube-system,Attempt:0,}" May 16 00:03:37.834905 containerd[1903]: time="2025-05-16T00:03:37.834866426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cx4l7,Uid:a85a73a3-d462-43e9-be1d-814c23557f89,Namespace:kube-system,Attempt:0,}" May 16 00:03:37.857992 containerd[1903]: time="2025-05-16T00:03:37.857603059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:03:37.857992 containerd[1903]: time="2025-05-16T00:03:37.857684366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:03:37.857992 containerd[1903]: time="2025-05-16T00:03:37.857699037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:03:37.857992 containerd[1903]: time="2025-05-16T00:03:37.857839489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:03:37.894304 systemd[1]: Started cri-containerd-4a803c62189b7e2ab1ce8357dca62e4904a1f11badac072f3a6941c427682846.scope - libcontainer container 4a803c62189b7e2ab1ce8357dca62e4904a1f11badac072f3a6941c427682846. May 16 00:03:37.906711 containerd[1903]: time="2025-05-16T00:03:37.906405578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:03:37.906711 containerd[1903]: time="2025-05-16T00:03:37.906539241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:03:37.906711 containerd[1903]: time="2025-05-16T00:03:37.906572558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:03:37.907080 containerd[1903]: time="2025-05-16T00:03:37.906688885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:03:37.928007 systemd[1]: Started cri-containerd-f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b.scope - libcontainer container f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b. 
May 16 00:03:37.958451 containerd[1903]: time="2025-05-16T00:03:37.957522870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dm6p2,Uid:72701fec-6289-4b21-aa5e-ac84cb326d13,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a803c62189b7e2ab1ce8357dca62e4904a1f11badac072f3a6941c427682846\"" May 16 00:03:37.962391 containerd[1903]: time="2025-05-16T00:03:37.962252274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cx4l7,Uid:a85a73a3-d462-43e9-be1d-814c23557f89,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\"" May 16 00:03:37.966082 containerd[1903]: time="2025-05-16T00:03:37.966028679Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 00:03:37.966734 containerd[1903]: time="2025-05-16T00:03:37.966354241Z" level=info msg="CreateContainer within sandbox \"4a803c62189b7e2ab1ce8357dca62e4904a1f11badac072f3a6941c427682846\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:03:38.020928 containerd[1903]: time="2025-05-16T00:03:38.020790760Z" level=info msg="CreateContainer within sandbox \"4a803c62189b7e2ab1ce8357dca62e4904a1f11badac072f3a6941c427682846\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a692cf3d0fa0f487b16ff81bba486fe9b2c91497cf8c549081d3edc48735b57f\"" May 16 00:03:38.022076 containerd[1903]: time="2025-05-16T00:03:38.022021062Z" level=info msg="StartContainer for \"a692cf3d0fa0f487b16ff81bba486fe9b2c91497cf8c549081d3edc48735b57f\"" May 16 00:03:38.055025 systemd[1]: Started cri-containerd-a692cf3d0fa0f487b16ff81bba486fe9b2c91497cf8c549081d3edc48735b57f.scope - libcontainer container a692cf3d0fa0f487b16ff81bba486fe9b2c91497cf8c549081d3edc48735b57f. 
May 16 00:03:38.071611 containerd[1903]: time="2025-05-16T00:03:38.071096322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9b24c,Uid:5e5b6712-56de-427d-a641-012139968840,Namespace:kube-system,Attempt:0,}" May 16 00:03:38.095628 containerd[1903]: time="2025-05-16T00:03:38.095583654Z" level=info msg="StartContainer for \"a692cf3d0fa0f487b16ff81bba486fe9b2c91497cf8c549081d3edc48735b57f\" returns successfully" May 16 00:03:38.111142 containerd[1903]: time="2025-05-16T00:03:38.110985398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:03:38.111389 containerd[1903]: time="2025-05-16T00:03:38.111132494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:03:38.111389 containerd[1903]: time="2025-05-16T00:03:38.111157772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:03:38.111389 containerd[1903]: time="2025-05-16T00:03:38.111300807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:03:38.134028 systemd[1]: Started cri-containerd-ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801.scope - libcontainer container ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801. 
May 16 00:03:38.186805 containerd[1903]: time="2025-05-16T00:03:38.186717818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9b24c,Uid:5e5b6712-56de-427d-a641-012139968840,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\"" May 16 00:03:38.503170 kubelet[3141]: I0516 00:03:38.503116 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dm6p2" podStartSLOduration=1.503099501 podStartE2EDuration="1.503099501s" podCreationTimestamp="2025-05-16 00:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:03:38.502957608 +0000 UTC m=+7.187270819" watchObservedRunningTime="2025-05-16 00:03:38.503099501 +0000 UTC m=+7.187412711" May 16 00:03:51.604620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029394381.mount: Deactivated successfully. May 16 00:04:08.581135 systemd[1]: Started sshd@7-172.31.20.206:22-139.178.89.65:36548.service - OpenSSH per-connection server daemon (139.178.89.65:36548). May 16 00:04:08.774566 sshd[3706]: Accepted publickey for core from 139.178.89.65 port 36548 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:08.776213 sshd-session[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:08.783082 systemd-logind[1878]: New session 8 of user core. May 16 00:04:08.787004 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 00:04:09.146792 sshd[3708]: Connection closed by 139.178.89.65 port 36548 May 16 00:04:09.148173 sshd-session[3706]: pam_unix(sshd:session): session closed for user core May 16 00:04:09.151173 systemd[1]: sshd@7-172.31.20.206:22-139.178.89.65:36548.service: Deactivated successfully. May 16 00:04:09.154128 systemd[1]: session-8.scope: Deactivated successfully. 
May 16 00:04:09.155941 systemd-logind[1878]: Session 8 logged out. Waiting for processes to exit. May 16 00:04:09.157268 systemd-logind[1878]: Removed session 8. May 16 00:04:11.888331 containerd[1903]: time="2025-05-16T00:04:11.888281940Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:04:11.890494 containerd[1903]: time="2025-05-16T00:04:11.890414317Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 16 00:04:11.893631 containerd[1903]: time="2025-05-16T00:04:11.892919240Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:04:11.894194 containerd[1903]: time="2025-05-16T00:04:11.894154842Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 33.928079803s" May 16 00:04:11.894270 containerd[1903]: time="2025-05-16T00:04:11.894199728Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 16 00:04:11.895409 containerd[1903]: time="2025-05-16T00:04:11.895373075Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 00:04:11.897489 containerd[1903]: 
time="2025-05-16T00:04:11.897348712Z" level=info msg="CreateContainer within sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:04:11.967556 containerd[1903]: time="2025-05-16T00:04:11.967499069Z" level=info msg="CreateContainer within sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399\"" May 16 00:04:11.969669 containerd[1903]: time="2025-05-16T00:04:11.968243339Z" level=info msg="StartContainer for \"788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399\"" May 16 00:04:12.154027 systemd[1]: Started cri-containerd-788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399.scope - libcontainer container 788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399. May 16 00:04:12.208050 containerd[1903]: time="2025-05-16T00:04:12.207966253Z" level=info msg="StartContainer for \"788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399\" returns successfully" May 16 00:04:12.220573 systemd[1]: cri-containerd-788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399.scope: Deactivated successfully. May 16 00:04:12.285691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399-rootfs.mount: Deactivated successfully. 
May 16 00:04:12.407976 containerd[1903]: time="2025-05-16T00:04:12.389513155Z" level=info msg="shim disconnected" id=788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399 namespace=k8s.io May 16 00:04:12.408223 containerd[1903]: time="2025-05-16T00:04:12.407978003Z" level=warning msg="cleaning up after shim disconnected" id=788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399 namespace=k8s.io May 16 00:04:12.408223 containerd[1903]: time="2025-05-16T00:04:12.408001728Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:04:12.630894 containerd[1903]: time="2025-05-16T00:04:12.630841541Z" level=info msg="CreateContainer within sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:04:12.652319 containerd[1903]: time="2025-05-16T00:04:12.652250884Z" level=info msg="CreateContainer within sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e\"" May 16 00:04:12.653142 containerd[1903]: time="2025-05-16T00:04:12.653105572Z" level=info msg="StartContainer for \"ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e\"" May 16 00:04:12.687012 systemd[1]: Started cri-containerd-ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e.scope - libcontainer container ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e. May 16 00:04:12.721929 containerd[1903]: time="2025-05-16T00:04:12.721811199Z" level=info msg="StartContainer for \"ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e\" returns successfully" May 16 00:04:12.734856 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:04:12.735846 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 16 00:04:12.735925 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 16 00:04:12.743076 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:04:12.743286 systemd[1]: cri-containerd-ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e.scope: Deactivated successfully. May 16 00:04:12.778438 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:04:12.788580 containerd[1903]: time="2025-05-16T00:04:12.788418052Z" level=info msg="shim disconnected" id=ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e namespace=k8s.io May 16 00:04:12.788954 containerd[1903]: time="2025-05-16T00:04:12.788639529Z" level=warning msg="cleaning up after shim disconnected" id=ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e namespace=k8s.io May 16 00:04:12.788954 containerd[1903]: time="2025-05-16T00:04:12.788659101Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:04:12.803512 containerd[1903]: time="2025-05-16T00:04:12.803448089Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:04:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 16 00:04:13.603373 containerd[1903]: time="2025-05-16T00:04:13.603195652Z" level=info msg="CreateContainer within sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:04:13.684439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2706873281.mount: Deactivated successfully. 
May 16 00:04:13.692370 containerd[1903]: time="2025-05-16T00:04:13.692328952Z" level=info msg="CreateContainer within sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0\"" May 16 00:04:13.694443 containerd[1903]: time="2025-05-16T00:04:13.693411944Z" level=info msg="StartContainer for \"8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0\"" May 16 00:04:13.751535 systemd[1]: Started cri-containerd-8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0.scope - libcontainer container 8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0. May 16 00:04:13.799530 containerd[1903]: time="2025-05-16T00:04:13.799485269Z" level=info msg="StartContainer for \"8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0\" returns successfully" May 16 00:04:13.810316 systemd[1]: cri-containerd-8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0.scope: Deactivated successfully. May 16 00:04:13.847083 containerd[1903]: time="2025-05-16T00:04:13.847018923Z" level=info msg="shim disconnected" id=8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0 namespace=k8s.io May 16 00:04:13.847083 containerd[1903]: time="2025-05-16T00:04:13.847068598Z" level=warning msg="cleaning up after shim disconnected" id=8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0 namespace=k8s.io May 16 00:04:13.847083 containerd[1903]: time="2025-05-16T00:04:13.847076768Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:04:13.946518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0-rootfs.mount: Deactivated successfully. May 16 00:04:14.033837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291683985.mount: Deactivated successfully. 
May 16 00:04:14.185119 systemd[1]: Started sshd@8-172.31.20.206:22-139.178.89.65:36554.service - OpenSSH per-connection server daemon (139.178.89.65:36554). May 16 00:04:14.413242 sshd[3933]: Accepted publickey for core from 139.178.89.65 port 36554 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:14.415992 sshd-session[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:14.423358 systemd-logind[1878]: New session 9 of user core. May 16 00:04:14.431153 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 00:04:14.606428 containerd[1903]: time="2025-05-16T00:04:14.606379470Z" level=info msg="CreateContainer within sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:04:14.658130 containerd[1903]: time="2025-05-16T00:04:14.658083763Z" level=info msg="CreateContainer within sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe\"" May 16 00:04:14.660096 containerd[1903]: time="2025-05-16T00:04:14.660066063Z" level=info msg="StartContainer for \"48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe\"" May 16 00:04:14.708960 systemd[1]: Started cri-containerd-48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe.scope - libcontainer container 48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe. May 16 00:04:14.783020 systemd[1]: cri-containerd-48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe.scope: Deactivated successfully. 
May 16 00:04:14.790594 containerd[1903]: time="2025-05-16T00:04:14.790340749Z" level=info msg="StartContainer for \"48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe\" returns successfully" May 16 00:04:14.985263 containerd[1903]: time="2025-05-16T00:04:14.985148128Z" level=info msg="shim disconnected" id=48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe namespace=k8s.io May 16 00:04:14.986221 containerd[1903]: time="2025-05-16T00:04:14.986185126Z" level=warning msg="cleaning up after shim disconnected" id=48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe namespace=k8s.io May 16 00:04:14.986221 containerd[1903]: time="2025-05-16T00:04:14.986210811Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:04:15.010408 sshd[3935]: Connection closed by 139.178.89.65 port 36554 May 16 00:04:15.009388 sshd-session[3933]: pam_unix(sshd:session): session closed for user core May 16 00:04:15.015062 systemd[1]: sshd@8-172.31.20.206:22-139.178.89.65:36554.service: Deactivated successfully. May 16 00:04:15.020729 systemd[1]: session-9.scope: Deactivated successfully. May 16 00:04:15.025490 systemd-logind[1878]: Session 9 logged out. Waiting for processes to exit. May 16 00:04:15.029333 systemd-logind[1878]: Removed session 9. 
May 16 00:04:15.043406 containerd[1903]: time="2025-05-16T00:04:15.043336109Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:04:15.045364 containerd[1903]: time="2025-05-16T00:04:15.045291937Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 16 00:04:15.047355 containerd[1903]: time="2025-05-16T00:04:15.047293381Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:04:15.048837 containerd[1903]: time="2025-05-16T00:04:15.048801004Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.153395928s" May 16 00:04:15.048961 containerd[1903]: time="2025-05-16T00:04:15.048839060Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 16 00:04:15.053247 containerd[1903]: time="2025-05-16T00:04:15.053123249Z" level=info msg="CreateContainer within sandbox \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 00:04:15.084881 containerd[1903]: time="2025-05-16T00:04:15.084803323Z" level=info msg="CreateContainer within sandbox 
\"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\"" May 16 00:04:15.087413 containerd[1903]: time="2025-05-16T00:04:15.085432682Z" level=info msg="StartContainer for \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\"" May 16 00:04:15.124010 systemd[1]: Started cri-containerd-c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f.scope - libcontainer container c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f. May 16 00:04:15.165321 containerd[1903]: time="2025-05-16T00:04:15.165258064Z" level=info msg="StartContainer for \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\" returns successfully" May 16 00:04:15.614459 containerd[1903]: time="2025-05-16T00:04:15.614364394Z" level=info msg="CreateContainer within sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:04:15.648005 containerd[1903]: time="2025-05-16T00:04:15.647952908Z" level=info msg="CreateContainer within sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\"" May 16 00:04:15.649061 containerd[1903]: time="2025-05-16T00:04:15.649002766Z" level=info msg="StartContainer for \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\"" May 16 00:04:15.717053 systemd[1]: Started cri-containerd-62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87.scope - libcontainer container 62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87. 
May 16 00:04:15.742198 kubelet[3141]: I0516 00:04:15.742129 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9b24c" podStartSLOduration=1.880415836 podStartE2EDuration="38.742102528s" podCreationTimestamp="2025-05-16 00:03:37 +0000 UTC" firstStartedPulling="2025-05-16 00:03:38.188419685 +0000 UTC m=+6.872732892" lastFinishedPulling="2025-05-16 00:04:15.050106378 +0000 UTC m=+43.734419584" observedRunningTime="2025-05-16 00:04:15.668346338 +0000 UTC m=+44.352659554" watchObservedRunningTime="2025-05-16 00:04:15.742102528 +0000 UTC m=+44.426415743" May 16 00:04:15.819489 containerd[1903]: time="2025-05-16T00:04:15.819443180Z" level=info msg="StartContainer for \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\" returns successfully" May 16 00:04:16.335437 kubelet[3141]: I0516 00:04:16.334104 3141 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 00:04:16.464270 systemd[1]: Created slice kubepods-burstable-podbb074e18_7e67_4dc2_bb6e_03cd9447aca4.slice - libcontainer container kubepods-burstable-podbb074e18_7e67_4dc2_bb6e_03cd9447aca4.slice. May 16 00:04:16.480925 systemd[1]: Created slice kubepods-burstable-pod28718096_9fe8_47d6_b3b6_698dda20313b.slice - libcontainer container kubepods-burstable-pod28718096_9fe8_47d6_b3b6_698dda20313b.slice. 
May 16 00:04:16.594903 kubelet[3141]: I0516 00:04:16.594535 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chrkf\" (UniqueName: \"kubernetes.io/projected/bb074e18-7e67-4dc2-bb6e-03cd9447aca4-kube-api-access-chrkf\") pod \"coredns-668d6bf9bc-wrcvt\" (UID: \"bb074e18-7e67-4dc2-bb6e-03cd9447aca4\") " pod="kube-system/coredns-668d6bf9bc-wrcvt" May 16 00:04:16.595209 kubelet[3141]: I0516 00:04:16.595079 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28718096-9fe8-47d6-b3b6-698dda20313b-config-volume\") pod \"coredns-668d6bf9bc-zwskt\" (UID: \"28718096-9fe8-47d6-b3b6-698dda20313b\") " pod="kube-system/coredns-668d6bf9bc-zwskt" May 16 00:04:16.595209 kubelet[3141]: I0516 00:04:16.595143 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jpkg\" (UniqueName: \"kubernetes.io/projected/28718096-9fe8-47d6-b3b6-698dda20313b-kube-api-access-4jpkg\") pod \"coredns-668d6bf9bc-zwskt\" (UID: \"28718096-9fe8-47d6-b3b6-698dda20313b\") " pod="kube-system/coredns-668d6bf9bc-zwskt" May 16 00:04:16.595209 kubelet[3141]: I0516 00:04:16.595165 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb074e18-7e67-4dc2-bb6e-03cd9447aca4-config-volume\") pod \"coredns-668d6bf9bc-wrcvt\" (UID: \"bb074e18-7e67-4dc2-bb6e-03cd9447aca4\") " pod="kube-system/coredns-668d6bf9bc-wrcvt" May 16 00:04:16.634374 kubelet[3141]: I0516 00:04:16.634121 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cx4l7" podStartSLOduration=5.70447072 podStartE2EDuration="39.634104489s" podCreationTimestamp="2025-05-16 00:03:37 +0000 UTC" firstStartedPulling="2025-05-16 00:03:37.965503933 +0000 UTC m=+6.649817137" 
lastFinishedPulling="2025-05-16 00:04:11.895137714 +0000 UTC m=+40.579450906" observedRunningTime="2025-05-16 00:04:16.631452344 +0000 UTC m=+45.315765549" watchObservedRunningTime="2025-05-16 00:04:16.634104489 +0000 UTC m=+45.318417699" May 16 00:04:16.773582 containerd[1903]: time="2025-05-16T00:04:16.773130658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wrcvt,Uid:bb074e18-7e67-4dc2-bb6e-03cd9447aca4,Namespace:kube-system,Attempt:0,}" May 16 00:04:16.787024 containerd[1903]: time="2025-05-16T00:04:16.786977688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwskt,Uid:28718096-9fe8-47d6-b3b6-698dda20313b,Namespace:kube-system,Attempt:0,}" May 16 00:04:18.800655 systemd-networkd[1814]: cilium_host: Link UP May 16 00:04:18.801587 systemd-networkd[1814]: cilium_net: Link UP May 16 00:04:18.801866 systemd-networkd[1814]: cilium_net: Gained carrier May 16 00:04:18.802018 systemd-networkd[1814]: cilium_host: Gained carrier May 16 00:04:18.802116 systemd-networkd[1814]: cilium_net: Gained IPv6LL May 16 00:04:18.802250 systemd-networkd[1814]: cilium_host: Gained IPv6LL May 16 00:04:18.804275 (udev-worker)[4152]: Network interface NamePolicy= disabled on kernel command line. May 16 00:04:18.807238 (udev-worker)[4185]: Network interface NamePolicy= disabled on kernel command line. May 16 00:04:18.938202 systemd-networkd[1814]: cilium_vxlan: Link UP May 16 00:04:18.938214 systemd-networkd[1814]: cilium_vxlan: Gained carrier May 16 00:04:19.509800 kernel: NET: Registered PF_ALG protocol family May 16 00:04:20.046327 systemd[1]: Started sshd@9-172.31.20.206:22-139.178.89.65:40564.service - OpenSSH per-connection server daemon (139.178.89.65:40564). 
May 16 00:04:20.251952 sshd[4400]: Accepted publickey for core from 139.178.89.65 port 40564 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:20.255120 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:20.262659 systemd-logind[1878]: New session 10 of user core. May 16 00:04:20.270332 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 00:04:20.322957 systemd-networkd[1814]: lxc_health: Link UP May 16 00:04:20.332824 (udev-worker)[4199]: Network interface NamePolicy= disabled on kernel command line. May 16 00:04:20.342018 systemd-networkd[1814]: lxc_health: Gained carrier May 16 00:04:20.523915 systemd-networkd[1814]: cilium_vxlan: Gained IPv6LL May 16 00:04:20.891835 sshd[4507]: Connection closed by 139.178.89.65 port 40564 May 16 00:04:20.893108 sshd-session[4400]: pam_unix(sshd:session): session closed for user core May 16 00:04:20.917147 systemd[1]: sshd@9-172.31.20.206:22-139.178.89.65:40564.service: Deactivated successfully. May 16 00:04:20.920316 systemd[1]: session-10.scope: Deactivated successfully. May 16 00:04:20.921641 systemd-logind[1878]: Session 10 logged out. Waiting for processes to exit. May 16 00:04:20.923437 systemd-logind[1878]: Removed session 10. May 16 00:04:20.959154 systemd-networkd[1814]: lxc1dd0d3f7a1ab: Link UP May 16 00:04:20.968924 kernel: eth0: renamed from tmpf6e4c May 16 00:04:20.973456 systemd-networkd[1814]: lxc09c8137fe32f: Link UP May 16 00:04:20.981305 (udev-worker)[4198]: Network interface NamePolicy= disabled on kernel command line. 
May 16 00:04:20.987581 kernel: eth0: renamed from tmpc4d4f May 16 00:04:20.984010 systemd-networkd[1814]: lxc1dd0d3f7a1ab: Gained carrier May 16 00:04:20.991897 systemd-networkd[1814]: lxc09c8137fe32f: Gained carrier May 16 00:04:21.545961 systemd-networkd[1814]: lxc_health: Gained IPv6LL May 16 00:04:22.506037 systemd-networkd[1814]: lxc09c8137fe32f: Gained IPv6LL May 16 00:04:23.018857 systemd-networkd[1814]: lxc1dd0d3f7a1ab: Gained IPv6LL May 16 00:04:25.831810 containerd[1903]: time="2025-05-16T00:04:25.831619447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:04:25.834607 containerd[1903]: time="2025-05-16T00:04:25.832353892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:04:25.834607 containerd[1903]: time="2025-05-16T00:04:25.832393316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:04:25.834607 containerd[1903]: time="2025-05-16T00:04:25.832579142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:04:25.895003 containerd[1903]: time="2025-05-16T00:04:25.890931735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:04:25.895003 containerd[1903]: time="2025-05-16T00:04:25.891024046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:04:25.895003 containerd[1903]: time="2025-05-16T00:04:25.891048695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:04:25.897750 containerd[1903]: time="2025-05-16T00:04:25.891185973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:04:25.913825 systemd[1]: Started cri-containerd-c4d4fc0f1fdbef399ac9dac9af43f15746a13a90d8375e50479da19362a25052.scope - libcontainer container c4d4fc0f1fdbef399ac9dac9af43f15746a13a90d8375e50479da19362a25052. May 16 00:04:25.927183 systemd[1]: Started sshd@10-172.31.20.206:22-139.178.89.65:40568.service - OpenSSH per-connection server daemon (139.178.89.65:40568). May 16 00:04:26.037049 systemd[1]: Started cri-containerd-f6e4cf27606182f2a05284a04110257515004a7ef41cfe6d96af0f8b44572b39.scope - libcontainer container f6e4cf27606182f2a05284a04110257515004a7ef41cfe6d96af0f8b44572b39. May 16 00:04:26.123009 containerd[1903]: time="2025-05-16T00:04:26.122965489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwskt,Uid:28718096-9fe8-47d6-b3b6-698dda20313b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4d4fc0f1fdbef399ac9dac9af43f15746a13a90d8375e50479da19362a25052\"" May 16 00:04:26.129266 containerd[1903]: time="2025-05-16T00:04:26.129029532Z" level=info msg="CreateContainer within sandbox \"c4d4fc0f1fdbef399ac9dac9af43f15746a13a90d8375e50479da19362a25052\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:04:26.179641 containerd[1903]: time="2025-05-16T00:04:26.179581416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wrcvt,Uid:bb074e18-7e67-4dc2-bb6e-03cd9447aca4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6e4cf27606182f2a05284a04110257515004a7ef41cfe6d96af0f8b44572b39\"" May 16 00:04:26.180504 containerd[1903]: time="2025-05-16T00:04:26.180415215Z" level=info msg="CreateContainer within sandbox \"c4d4fc0f1fdbef399ac9dac9af43f15746a13a90d8375e50479da19362a25052\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5de6190abc90d7a6c46e0d6653551aa820bfd485584bdd096c6af15d2d4fccf\"" May 16 00:04:26.180961 containerd[1903]: time="2025-05-16T00:04:26.180931892Z" level=info msg="StartContainer for \"c5de6190abc90d7a6c46e0d6653551aa820bfd485584bdd096c6af15d2d4fccf\"" May 16 00:04:26.189892 containerd[1903]: time="2025-05-16T00:04:26.189634652Z" level=info msg="CreateContainer within sandbox \"f6e4cf27606182f2a05284a04110257515004a7ef41cfe6d96af0f8b44572b39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:04:26.205803 sshd[4616]: Accepted publickey for core from 139.178.89.65 port 40568 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:26.208775 sshd-session[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:26.223899 systemd-logind[1878]: New session 11 of user core. May 16 00:04:26.226992 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 00:04:26.240421 containerd[1903]: time="2025-05-16T00:04:26.240370204Z" level=info msg="CreateContainer within sandbox \"f6e4cf27606182f2a05284a04110257515004a7ef41cfe6d96af0f8b44572b39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3207403db3b40b0465b31470854fc41e38279184e1e26d27870c289d02f694b7\"" May 16 00:04:26.242673 containerd[1903]: time="2025-05-16T00:04:26.242052237Z" level=info msg="StartContainer for \"3207403db3b40b0465b31470854fc41e38279184e1e26d27870c289d02f694b7\"" May 16 00:04:26.251236 systemd[1]: Started cri-containerd-c5de6190abc90d7a6c46e0d6653551aa820bfd485584bdd096c6af15d2d4fccf.scope - libcontainer container c5de6190abc90d7a6c46e0d6653551aa820bfd485584bdd096c6af15d2d4fccf. May 16 00:04:26.287004 systemd[1]: Started cri-containerd-3207403db3b40b0465b31470854fc41e38279184e1e26d27870c289d02f694b7.scope - libcontainer container 3207403db3b40b0465b31470854fc41e38279184e1e26d27870c289d02f694b7. 
May 16 00:04:26.312632 containerd[1903]: time="2025-05-16T00:04:26.312586646Z" level=info msg="StartContainer for \"c5de6190abc90d7a6c46e0d6653551aa820bfd485584bdd096c6af15d2d4fccf\" returns successfully" May 16 00:04:26.339376 containerd[1903]: time="2025-05-16T00:04:26.339214709Z" level=info msg="StartContainer for \"3207403db3b40b0465b31470854fc41e38279184e1e26d27870c289d02f694b7\" returns successfully" May 16 00:04:26.512209 sshd[4666]: Connection closed by 139.178.89.65 port 40568 May 16 00:04:26.512994 sshd-session[4616]: pam_unix(sshd:session): session closed for user core May 16 00:04:26.516835 systemd-logind[1878]: Session 11 logged out. Waiting for processes to exit. May 16 00:04:26.517734 systemd[1]: sshd@10-172.31.20.206:22-139.178.89.65:40568.service: Deactivated successfully. May 16 00:04:26.519703 systemd[1]: session-11.scope: Deactivated successfully. May 16 00:04:26.520836 systemd-logind[1878]: Removed session 11. May 16 00:04:26.551240 systemd[1]: Started sshd@11-172.31.20.206:22-139.178.89.65:52948.service - OpenSSH per-connection server daemon (139.178.89.65:52948). 
May 16 00:04:26.701065 kubelet[3141]: I0516 00:04:26.700999 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wrcvt" podStartSLOduration=49.700977023 podStartE2EDuration="49.700977023s" podCreationTimestamp="2025-05-16 00:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:04:26.696902474 +0000 UTC m=+55.381215686" watchObservedRunningTime="2025-05-16 00:04:26.700977023 +0000 UTC m=+55.385290234" May 16 00:04:26.701681 kubelet[3141]: I0516 00:04:26.701129 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zwskt" podStartSLOduration=49.701121246 podStartE2EDuration="49.701121246s" podCreationTimestamp="2025-05-16 00:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:04:26.678650701 +0000 UTC m=+55.362963913" watchObservedRunningTime="2025-05-16 00:04:26.701121246 +0000 UTC m=+55.385434460" May 16 00:04:26.716181 sshd[4733]: Accepted publickey for core from 139.178.89.65 port 52948 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:26.717209 sshd-session[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:26.725698 systemd-logind[1878]: New session 12 of user core. May 16 00:04:26.733162 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 00:04:26.845804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890769207.mount: Deactivated successfully. May 16 00:04:27.143805 sshd[4742]: Connection closed by 139.178.89.65 port 52948 May 16 00:04:27.145451 sshd-session[4733]: pam_unix(sshd:session): session closed for user core May 16 00:04:27.150421 systemd[1]: sshd@11-172.31.20.206:22-139.178.89.65:52948.service: Deactivated successfully. 
May 16 00:04:27.152228 systemd[1]: session-12.scope: Deactivated successfully. May 16 00:04:27.156823 systemd-logind[1878]: Session 12 logged out. Waiting for processes to exit. May 16 00:04:27.158532 systemd-logind[1878]: Removed session 12. May 16 00:04:27.185136 systemd[1]: Started sshd@12-172.31.20.206:22-139.178.89.65:52962.service - OpenSSH per-connection server daemon (139.178.89.65:52962). May 16 00:04:27.367193 sshd[4755]: Accepted publickey for core from 139.178.89.65 port 52962 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:27.370401 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:27.376069 systemd-logind[1878]: New session 13 of user core. May 16 00:04:27.379954 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 00:04:27.613920 sshd[4757]: Connection closed by 139.178.89.65 port 52962 May 16 00:04:27.615450 sshd-session[4755]: pam_unix(sshd:session): session closed for user core May 16 00:04:27.619242 systemd-logind[1878]: Session 13 logged out. Waiting for processes to exit. May 16 00:04:27.620147 systemd[1]: sshd@12-172.31.20.206:22-139.178.89.65:52962.service: Deactivated successfully. May 16 00:04:27.622181 systemd[1]: session-13.scope: Deactivated successfully. May 16 00:04:27.623268 systemd-logind[1878]: Removed session 13. 
May 16 00:04:29.890216 ntpd[1873]: Listen normally on 8 cilium_host 192.168.0.104:123 May 16 00:04:29.890292 ntpd[1873]: Listen normally on 9 cilium_net [fe80::b800:dfff:fe28:c1e4%4]:123 May 16 00:04:29.890347 ntpd[1873]: Listen normally on 10 cilium_host [fe80::8c6:65ff:fe9e:9143%5]:123 May 16 00:04:29.890378 ntpd[1873]: Listen normally on 11 cilium_vxlan [fe80::98cc:e2ff:fef5:bbc8%6]:123 May 16 00:04:29.890408 ntpd[1873]: Listen normally on 12 lxc_health [fe80::9459:b5ff:fe9f:1c02%8]:123 May 16 00:04:29.890444 ntpd[1873]: Listen normally on 13 lxc1dd0d3f7a1ab [fe80::b8b8:c3ff:fe65:688e%10]:123 May 16 00:04:29.890471 ntpd[1873]: Listen normally on 14 lxc09c8137fe32f [fe80::3cbe:d7ff:fe7d:55a1%12]:123 May 16 00:04:32.647827 systemd[1]: Started sshd@13-172.31.20.206:22-139.178.89.65:52976.service - OpenSSH per-connection server daemon (139.178.89.65:52976). 
May 16 00:04:32.820835 sshd[4776]: Accepted publickey for core from 139.178.89.65 port 52976 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:32.822271 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:32.827142 systemd-logind[1878]: New session 14 of user core. May 16 00:04:32.831956 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 00:04:33.031495 sshd[4778]: Connection closed by 139.178.89.65 port 52976 May 16 00:04:33.032899 sshd-session[4776]: pam_unix(sshd:session): session closed for user core May 16 00:04:33.037645 systemd-logind[1878]: Session 14 logged out. Waiting for processes to exit. May 16 00:04:33.038757 systemd[1]: sshd@13-172.31.20.206:22-139.178.89.65:52976.service: Deactivated successfully. May 16 00:04:33.041098 systemd[1]: session-14.scope: Deactivated successfully. May 16 00:04:33.042336 systemd-logind[1878]: Removed session 14. May 16 00:04:38.068166 systemd[1]: Started sshd@14-172.31.20.206:22-139.178.89.65:52704.service - OpenSSH per-connection server daemon (139.178.89.65:52704). May 16 00:04:38.237817 sshd[4789]: Accepted publickey for core from 139.178.89.65 port 52704 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:38.240178 sshd-session[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:38.245519 systemd-logind[1878]: New session 15 of user core. May 16 00:04:38.251056 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 00:04:38.450460 sshd[4791]: Connection closed by 139.178.89.65 port 52704 May 16 00:04:38.452069 sshd-session[4789]: pam_unix(sshd:session): session closed for user core May 16 00:04:38.455888 systemd[1]: sshd@14-172.31.20.206:22-139.178.89.65:52704.service: Deactivated successfully. May 16 00:04:38.458205 systemd[1]: session-15.scope: Deactivated successfully. 
May 16 00:04:38.459075 systemd-logind[1878]: Session 15 logged out. Waiting for processes to exit. May 16 00:04:38.460154 systemd-logind[1878]: Removed session 15. May 16 00:04:38.489523 systemd[1]: Started sshd@15-172.31.20.206:22-139.178.89.65:52720.service - OpenSSH per-connection server daemon (139.178.89.65:52720). May 16 00:04:38.657440 sshd[4802]: Accepted publickey for core from 139.178.89.65 port 52720 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:38.659136 sshd-session[4802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:38.667101 systemd-logind[1878]: New session 16 of user core. May 16 00:04:38.671454 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 00:04:39.370036 sshd[4804]: Connection closed by 139.178.89.65 port 52720 May 16 00:04:39.372065 sshd-session[4802]: pam_unix(sshd:session): session closed for user core May 16 00:04:39.376049 systemd[1]: sshd@15-172.31.20.206:22-139.178.89.65:52720.service: Deactivated successfully. May 16 00:04:39.378919 systemd[1]: session-16.scope: Deactivated successfully. May 16 00:04:39.380643 systemd-logind[1878]: Session 16 logged out. Waiting for processes to exit. May 16 00:04:39.382989 systemd-logind[1878]: Removed session 16. May 16 00:04:39.401500 systemd[1]: Started sshd@16-172.31.20.206:22-139.178.89.65:52722.service - OpenSSH per-connection server daemon (139.178.89.65:52722). May 16 00:04:39.601833 sshd[4815]: Accepted publickey for core from 139.178.89.65 port 52722 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:39.603677 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:39.615724 systemd-logind[1878]: New session 17 of user core. May 16 00:04:39.621993 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 16 00:04:40.750898 sshd[4817]: Connection closed by 139.178.89.65 port 52722 May 16 00:04:40.751262 sshd-session[4815]: pam_unix(sshd:session): session closed for user core May 16 00:04:40.757080 systemd-logind[1878]: Session 17 logged out. Waiting for processes to exit. May 16 00:04:40.760402 systemd[1]: sshd@16-172.31.20.206:22-139.178.89.65:52722.service: Deactivated successfully. May 16 00:04:40.764732 systemd[1]: session-17.scope: Deactivated successfully. May 16 00:04:40.767284 systemd-logind[1878]: Removed session 17. May 16 00:04:40.785159 systemd[1]: Started sshd@17-172.31.20.206:22-139.178.89.65:52736.service - OpenSSH per-connection server daemon (139.178.89.65:52736). May 16 00:04:40.940935 sshd[4836]: Accepted publickey for core from 139.178.89.65 port 52736 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:40.942900 sshd-session[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:40.948351 systemd-logind[1878]: New session 18 of user core. May 16 00:04:40.954979 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 00:04:41.425327 sshd[4838]: Connection closed by 139.178.89.65 port 52736 May 16 00:04:41.426233 sshd-session[4836]: pam_unix(sshd:session): session closed for user core May 16 00:04:41.430312 systemd[1]: sshd@17-172.31.20.206:22-139.178.89.65:52736.service: Deactivated successfully. May 16 00:04:41.433464 systemd[1]: session-18.scope: Deactivated successfully. May 16 00:04:41.434491 systemd-logind[1878]: Session 18 logged out. Waiting for processes to exit. May 16 00:04:41.435960 systemd-logind[1878]: Removed session 18. May 16 00:04:41.465309 systemd[1]: Started sshd@18-172.31.20.206:22-139.178.89.65:52738.service - OpenSSH per-connection server daemon (139.178.89.65:52738). 
May 16 00:04:41.625410 sshd[4847]: Accepted publickey for core from 139.178.89.65 port 52738 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:41.627007 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:41.632345 systemd-logind[1878]: New session 19 of user core. May 16 00:04:41.639018 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 00:04:41.848391 sshd[4849]: Connection closed by 139.178.89.65 port 52738 May 16 00:04:41.849991 sshd-session[4847]: pam_unix(sshd:session): session closed for user core May 16 00:04:41.853265 systemd-logind[1878]: Session 19 logged out. Waiting for processes to exit. May 16 00:04:41.853973 systemd[1]: sshd@18-172.31.20.206:22-139.178.89.65:52738.service: Deactivated successfully. May 16 00:04:41.856112 systemd[1]: session-19.scope: Deactivated successfully. May 16 00:04:41.857319 systemd-logind[1878]: Removed session 19. May 16 00:04:46.879879 systemd[1]: Started sshd@19-172.31.20.206:22-139.178.89.65:34342.service - OpenSSH per-connection server daemon (139.178.89.65:34342). May 16 00:04:47.045498 sshd[4860]: Accepted publickey for core from 139.178.89.65 port 34342 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:47.046970 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:47.052267 systemd-logind[1878]: New session 20 of user core. May 16 00:04:47.055973 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 00:04:47.246692 sshd[4862]: Connection closed by 139.178.89.65 port 34342 May 16 00:04:47.247197 sshd-session[4860]: pam_unix(sshd:session): session closed for user core May 16 00:04:47.251157 systemd-logind[1878]: Session 20 logged out. Waiting for processes to exit. May 16 00:04:47.252240 systemd[1]: sshd@19-172.31.20.206:22-139.178.89.65:34342.service: Deactivated successfully. 
May 16 00:04:47.257308 systemd[1]: session-20.scope: Deactivated successfully. May 16 00:04:47.262866 systemd-logind[1878]: Removed session 20. May 16 00:04:52.283436 systemd[1]: Started sshd@20-172.31.20.206:22-139.178.89.65:34354.service - OpenSSH per-connection server daemon (139.178.89.65:34354). May 16 00:04:52.460899 sshd[4875]: Accepted publickey for core from 139.178.89.65 port 34354 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:52.462426 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:52.467744 systemd-logind[1878]: New session 21 of user core. May 16 00:04:52.476033 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 00:04:52.733074 sshd[4877]: Connection closed by 139.178.89.65 port 34354 May 16 00:04:52.734249 sshd-session[4875]: pam_unix(sshd:session): session closed for user core May 16 00:04:52.740255 systemd[1]: sshd@20-172.31.20.206:22-139.178.89.65:34354.service: Deactivated successfully. May 16 00:04:52.743467 systemd[1]: session-21.scope: Deactivated successfully. May 16 00:04:52.744386 systemd-logind[1878]: Session 21 logged out. Waiting for processes to exit. May 16 00:04:52.745837 systemd-logind[1878]: Removed session 21. May 16 00:04:57.769152 systemd[1]: Started sshd@21-172.31.20.206:22-139.178.89.65:36040.service - OpenSSH per-connection server daemon (139.178.89.65:36040). May 16 00:04:57.943921 sshd[4890]: Accepted publickey for core from 139.178.89.65 port 36040 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:04:57.945453 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:04:57.950677 systemd-logind[1878]: New session 22 of user core. May 16 00:04:57.957070 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 16 00:04:58.178275 sshd[4892]: Connection closed by 139.178.89.65 port 36040 May 16 00:04:58.179173 sshd-session[4890]: pam_unix(sshd:session): session closed for user core May 16 00:04:58.184250 systemd-logind[1878]: Session 22 logged out. Waiting for processes to exit. May 16 00:04:58.185089 systemd[1]: sshd@21-172.31.20.206:22-139.178.89.65:36040.service: Deactivated successfully. May 16 00:04:58.187196 systemd[1]: session-22.scope: Deactivated successfully. May 16 00:04:58.188885 systemd-logind[1878]: Removed session 22. May 16 00:05:03.216235 systemd[1]: Started sshd@22-172.31.20.206:22-139.178.89.65:36054.service - OpenSSH per-connection server daemon (139.178.89.65:36054). May 16 00:05:03.419826 sshd[4903]: Accepted publickey for core from 139.178.89.65 port 36054 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:05:03.422108 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:05:03.428127 systemd-logind[1878]: New session 23 of user core. May 16 00:05:03.432998 systemd[1]: Started session-23.scope - Session 23 of User core. May 16 00:05:04.007422 sshd[4905]: Connection closed by 139.178.89.65 port 36054 May 16 00:05:04.008304 sshd-session[4903]: pam_unix(sshd:session): session closed for user core May 16 00:05:04.019393 systemd[1]: sshd@22-172.31.20.206:22-139.178.89.65:36054.service: Deactivated successfully. May 16 00:05:04.022144 systemd[1]: session-23.scope: Deactivated successfully. May 16 00:05:04.023753 systemd-logind[1878]: Session 23 logged out. Waiting for processes to exit. May 16 00:05:04.035749 systemd[1]: Started sshd@23-172.31.20.206:22-139.178.89.65:36058.service - OpenSSH per-connection server daemon (139.178.89.65:36058). May 16 00:05:04.038265 systemd-logind[1878]: Removed session 23. 
May 16 00:05:04.215530 sshd[4916]: Accepted publickey for core from 139.178.89.65 port 36058 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg May 16 00:05:04.217192 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:05:04.222869 systemd-logind[1878]: New session 24 of user core. May 16 00:05:04.231047 systemd[1]: Started session-24.scope - Session 24 of User core. May 16 00:05:05.874733 containerd[1903]: time="2025-05-16T00:05:05.874648823Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:05:05.898985 containerd[1903]: time="2025-05-16T00:05:05.898250444Z" level=info msg="StopContainer for \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\" with timeout 30 (s)" May 16 00:05:05.898985 containerd[1903]: time="2025-05-16T00:05:05.898756014Z" level=info msg="StopContainer for \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\" with timeout 2 (s)" May 16 00:05:05.900066 containerd[1903]: time="2025-05-16T00:05:05.900020682Z" level=info msg="Stop container \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\" with signal terminated" May 16 00:05:05.900264 containerd[1903]: time="2025-05-16T00:05:05.900122168Z" level=info msg="Stop container \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\" with signal terminated" May 16 00:05:05.932158 systemd[1]: cri-containerd-c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f.scope: Deactivated successfully. 
May 16 00:05:05.936504 systemd-networkd[1814]: lxc_health: Link DOWN May 16 00:05:05.936512 systemd-networkd[1814]: lxc_health: Lost carrier May 16 00:05:05.961559 systemd[1]: cri-containerd-62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87.scope: Deactivated successfully. May 16 00:05:05.962675 systemd[1]: cri-containerd-62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87.scope: Consumed 8.573s CPU time. May 16 00:05:05.990593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f-rootfs.mount: Deactivated successfully. May 16 00:05:06.015536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87-rootfs.mount: Deactivated successfully. May 16 00:05:06.018847 containerd[1903]: time="2025-05-16T00:05:06.018741765Z" level=info msg="shim disconnected" id=c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f namespace=k8s.io May 16 00:05:06.018847 containerd[1903]: time="2025-05-16T00:05:06.018844266Z" level=warning msg="cleaning up after shim disconnected" id=c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f namespace=k8s.io May 16 00:05:06.020917 containerd[1903]: time="2025-05-16T00:05:06.018858181Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:05:06.029575 containerd[1903]: time="2025-05-16T00:05:06.029305731Z" level=info msg="shim disconnected" id=62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87 namespace=k8s.io May 16 00:05:06.029575 containerd[1903]: time="2025-05-16T00:05:06.029369521Z" level=warning msg="cleaning up after shim disconnected" id=62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87 namespace=k8s.io May 16 00:05:06.029575 containerd[1903]: time="2025-05-16T00:05:06.029382610Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:05:06.046682 containerd[1903]: 
time="2025-05-16T00:05:06.046219667Z" level=info msg="StopContainer for \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\" returns successfully" May 16 00:05:06.057920 containerd[1903]: time="2025-05-16T00:05:06.057874186Z" level=info msg="StopPodSandbox for \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\"" May 16 00:05:06.058937 containerd[1903]: time="2025-05-16T00:05:06.058901722Z" level=info msg="StopContainer for \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\" returns successfully" May 16 00:05:06.059665 containerd[1903]: time="2025-05-16T00:05:06.059607016Z" level=info msg="StopPodSandbox for \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\"" May 16 00:05:06.059784 containerd[1903]: time="2025-05-16T00:05:06.059696742Z" level=info msg="Container to stop \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:05:06.062026 containerd[1903]: time="2025-05-16T00:05:06.061824161Z" level=info msg="Container to stop \"788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:05:06.062026 containerd[1903]: time="2025-05-16T00:05:06.061909155Z" level=info msg="Container to stop \"ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:05:06.062026 containerd[1903]: time="2025-05-16T00:05:06.061923361Z" level=info msg="Container to stop \"48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:05:06.062026 containerd[1903]: time="2025-05-16T00:05:06.061952534Z" level=info msg="Container to stop \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" May 16 00:05:06.062026 containerd[1903]: time="2025-05-16T00:05:06.061969654Z" level=info msg="Container to stop \"8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:05:06.063415 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801-shm.mount: Deactivated successfully. May 16 00:05:06.079955 systemd[1]: cri-containerd-f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b.scope: Deactivated successfully. May 16 00:05:06.083005 systemd[1]: cri-containerd-ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801.scope: Deactivated successfully. May 16 00:05:06.130324 containerd[1903]: time="2025-05-16T00:05:06.130085776Z" level=info msg="shim disconnected" id=ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801 namespace=k8s.io May 16 00:05:06.130324 containerd[1903]: time="2025-05-16T00:05:06.130134224Z" level=warning msg="cleaning up after shim disconnected" id=ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801 namespace=k8s.io May 16 00:05:06.130324 containerd[1903]: time="2025-05-16T00:05:06.130142644Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:05:06.131936 containerd[1903]: time="2025-05-16T00:05:06.131703001Z" level=info msg="shim disconnected" id=f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b namespace=k8s.io May 16 00:05:06.131936 containerd[1903]: time="2025-05-16T00:05:06.131748741Z" level=warning msg="cleaning up after shim disconnected" id=f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b namespace=k8s.io May 16 00:05:06.131936 containerd[1903]: time="2025-05-16T00:05:06.131758455Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:05:06.151180 containerd[1903]: time="2025-05-16T00:05:06.150994420Z" level=warning msg="cleanup warnings 
time=\"2025-05-16T00:05:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 16 00:05:06.152313 containerd[1903]: time="2025-05-16T00:05:06.152279712Z" level=info msg="TearDown network for sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" successfully" May 16 00:05:06.152313 containerd[1903]: time="2025-05-16T00:05:06.152307392Z" level=info msg="StopPodSandbox for \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" returns successfully" May 16 00:05:06.157263 containerd[1903]: time="2025-05-16T00:05:06.157130430Z" level=info msg="TearDown network for sandbox \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\" successfully" May 16 00:05:06.157263 containerd[1903]: time="2025-05-16T00:05:06.157162190Z" level=info msg="StopPodSandbox for \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\" returns successfully" May 16 00:05:06.284897 kubelet[3141]: I0516 00:05:06.284834 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-cni-path\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.284897 kubelet[3141]: I0516 00:05:06.284893 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-cilium-run\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.285474 kubelet[3141]: I0516 00:05:06.284930 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngn7f\" (UniqueName: \"kubernetes.io/projected/a85a73a3-d462-43e9-be1d-814c23557f89-kube-api-access-ngn7f\") pod 
\"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.285474 kubelet[3141]: I0516 00:05:06.284955 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-lib-modules\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.285474 kubelet[3141]: I0516 00:05:06.284973 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-cilium-cgroup\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.285474 kubelet[3141]: I0516 00:05:06.284998 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-etc-cni-netd\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.285474 kubelet[3141]: I0516 00:05:06.285018 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-bpf-maps\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.285474 kubelet[3141]: I0516 00:05:06.285047 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k4zz\" (UniqueName: \"kubernetes.io/projected/5e5b6712-56de-427d-a641-012139968840-kube-api-access-4k4zz\") pod \"5e5b6712-56de-427d-a641-012139968840\" (UID: \"5e5b6712-56de-427d-a641-012139968840\") " May 16 00:05:06.286152 kubelet[3141]: I0516 00:05:06.285072 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-xtables-lock\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.286152 kubelet[3141]: I0516 00:05:06.285102 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a85a73a3-d462-43e9-be1d-814c23557f89-clustermesh-secrets\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.286152 kubelet[3141]: I0516 00:05:06.285123 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-host-proc-sys-kernel\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.286152 kubelet[3141]: I0516 00:05:06.285152 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a85a73a3-d462-43e9-be1d-814c23557f89-hubble-tls\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.286152 kubelet[3141]: I0516 00:05:06.285173 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-host-proc-sys-net\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.286152 kubelet[3141]: I0516 00:05:06.285196 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-hostproc\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.286389 kubelet[3141]: I0516 
00:05:06.285223 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a85a73a3-d462-43e9-be1d-814c23557f89-cilium-config-path\") pod \"a85a73a3-d462-43e9-be1d-814c23557f89\" (UID: \"a85a73a3-d462-43e9-be1d-814c23557f89\") " May 16 00:05:06.286389 kubelet[3141]: I0516 00:05:06.285251 3141 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e5b6712-56de-427d-a641-012139968840-cilium-config-path\") pod \"5e5b6712-56de-427d-a641-012139968840\" (UID: \"5e5b6712-56de-427d-a641-012139968840\") " May 16 00:05:06.304118 kubelet[3141]: I0516 00:05:06.303687 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-cni-path" (OuterVolumeSpecName: "cni-path") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:05:06.304118 kubelet[3141]: I0516 00:05:06.303792 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:05:06.306805 kubelet[3141]: I0516 00:05:06.306701 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:05:06.306938 kubelet[3141]: I0516 00:05:06.306919 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a85a73a3-d462-43e9-be1d-814c23557f89-kube-api-access-ngn7f" (OuterVolumeSpecName: "kube-api-access-ngn7f") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "kube-api-access-ngn7f". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:05:06.306980 kubelet[3141]: I0516 00:05:06.306959 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:05:06.306980 kubelet[3141]: I0516 00:05:06.306975 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:05:06.307033 kubelet[3141]: I0516 00:05:06.306989 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:05:06.309862 kubelet[3141]: I0516 00:05:06.308992 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e5b6712-56de-427d-a641-012139968840-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5e5b6712-56de-427d-a641-012139968840" (UID: "5e5b6712-56de-427d-a641-012139968840"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 16 00:05:06.309862 kubelet[3141]: I0516 00:05:06.309052 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:05:06.309862 kubelet[3141]: I0516 00:05:06.302221 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e5b6712-56de-427d-a641-012139968840-kube-api-access-4k4zz" (OuterVolumeSpecName: "kube-api-access-4k4zz") pod "5e5b6712-56de-427d-a641-012139968840" (UID: "5e5b6712-56de-427d-a641-012139968840"). InnerVolumeSpecName "kube-api-access-4k4zz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 00:05:06.309862 kubelet[3141]: I0516 00:05:06.309084 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:05:06.310063 kubelet[3141]: I0516 00:05:06.309099 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:05:06.310864 kubelet[3141]: I0516 00:05:06.310837 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a85a73a3-d462-43e9-be1d-814c23557f89-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 16 00:05:06.311070 kubelet[3141]: I0516 00:05:06.311057 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-hostproc" (OuterVolumeSpecName: "hostproc") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:05:06.311646 kubelet[3141]: I0516 00:05:06.311619 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a85a73a3-d462-43e9-be1d-814c23557f89-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 00:05:06.313937 kubelet[3141]: I0516 00:05:06.313740 3141 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85a73a3-d462-43e9-be1d-814c23557f89-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a85a73a3-d462-43e9-be1d-814c23557f89" (UID: "a85a73a3-d462-43e9-be1d-814c23557f89"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 16 00:05:06.386517 kubelet[3141]: I0516 00:05:06.386363 3141 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-etc-cni-netd\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.386517 kubelet[3141]: I0516 00:05:06.386414 3141 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-bpf-maps\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.386517 kubelet[3141]: I0516 00:05:06.386429 3141 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4k4zz\" (UniqueName: \"kubernetes.io/projected/5e5b6712-56de-427d-a641-012139968840-kube-api-access-4k4zz\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.386517 kubelet[3141]: I0516 00:05:06.386445 3141 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-xtables-lock\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.386517 kubelet[3141]: I0516 00:05:06.386457 3141 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a85a73a3-d462-43e9-be1d-814c23557f89-clustermesh-secrets\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.386517 kubelet[3141]: I0516 00:05:06.386469 3141 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-host-proc-sys-kernel\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.386517 kubelet[3141]: I0516 00:05:06.386479 3141 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a85a73a3-d462-43e9-be1d-814c23557f89-hubble-tls\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.386517 kubelet[3141]: I0516 00:05:06.386492 3141 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-host-proc-sys-net\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.387143 kubelet[3141]: I0516 00:05:06.386503 3141 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-hostproc\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.387143 kubelet[3141]: I0516 00:05:06.386514 3141 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a85a73a3-d462-43e9-be1d-814c23557f89-cilium-config-path\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.387143 kubelet[3141]: I0516 00:05:06.386526 3141 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e5b6712-56de-427d-a641-012139968840-cilium-config-path\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.387143 kubelet[3141]: I0516 00:05:06.386539 3141 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-cni-path\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.387143 kubelet[3141]: I0516 00:05:06.386549 3141 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-cilium-run\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.387143 kubelet[3141]: I0516 00:05:06.386561 3141 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ngn7f\" (UniqueName: \"kubernetes.io/projected/a85a73a3-d462-43e9-be1d-814c23557f89-kube-api-access-ngn7f\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.387143 kubelet[3141]: I0516 00:05:06.386572 3141 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-lib-modules\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.387143 kubelet[3141]: I0516 00:05:06.386582 3141 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a85a73a3-d462-43e9-be1d-814c23557f89-cilium-cgroup\") on node \"ip-172-31-20-206\" DevicePath \"\""
May 16 00:05:06.563576 kubelet[3141]: E0516 00:05:06.563505 3141 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 00:05:06.737691 systemd[1]: Removed slice kubepods-besteffort-pod5e5b6712_56de_427d_a641_012139968840.slice - libcontainer container kubepods-besteffort-pod5e5b6712_56de_427d_a641_012139968840.slice.
May 16 00:05:06.756117 kubelet[3141]: I0516 00:05:06.756081 3141 scope.go:117] "RemoveContainer" containerID="c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f"
May 16 00:05:06.779100 containerd[1903]: time="2025-05-16T00:05:06.778691072Z" level=info msg="RemoveContainer for \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\""
May 16 00:05:06.789242 containerd[1903]: time="2025-05-16T00:05:06.789176569Z" level=info msg="RemoveContainer for \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\" returns successfully"
May 16 00:05:06.794243 kubelet[3141]: I0516 00:05:06.793960 3141 scope.go:117] "RemoveContainer" containerID="c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f"
May 16 00:05:06.794369 containerd[1903]: time="2025-05-16T00:05:06.794302918Z" level=error msg="ContainerStatus for \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\": not found"
May 16 00:05:06.795848 systemd[1]: Removed slice kubepods-burstable-poda85a73a3_d462_43e9_be1d_814c23557f89.slice - libcontainer container kubepods-burstable-poda85a73a3_d462_43e9_be1d_814c23557f89.slice.
May 16 00:05:06.795937 systemd[1]: kubepods-burstable-poda85a73a3_d462_43e9_be1d_814c23557f89.slice: Consumed 8.662s CPU time.
May 16 00:05:06.813203 kubelet[3141]: E0516 00:05:06.813148 3141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\": not found" containerID="c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f"
May 16 00:05:06.818895 kubelet[3141]: I0516 00:05:06.817819 3141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f"} err="failed to get container status \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4450b67253807a5c076f1b4b786f166c97f782615f033f50f40fd4709f9153f\": not found"
May 16 00:05:06.818895 kubelet[3141]: I0516 00:05:06.817940 3141 scope.go:117] "RemoveContainer" containerID="62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87"
May 16 00:05:06.821220 containerd[1903]: time="2025-05-16T00:05:06.820839119Z" level=info msg="RemoveContainer for \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\""
May 16 00:05:06.830724 containerd[1903]: time="2025-05-16T00:05:06.830653145Z" level=info msg="RemoveContainer for \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\" returns successfully"
May 16 00:05:06.831406 kubelet[3141]: I0516 00:05:06.831378 3141 scope.go:117] "RemoveContainer" containerID="48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe"
May 16 00:05:06.834663 containerd[1903]: time="2025-05-16T00:05:06.833339702Z" level=info msg="RemoveContainer for \"48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe\""
May 16 00:05:06.840633 containerd[1903]: time="2025-05-16T00:05:06.839169406Z" level=info msg="RemoveContainer for \"48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe\" returns successfully"
May 16 00:05:06.840687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801-rootfs.mount: Deactivated successfully.
May 16 00:05:06.840847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b-rootfs.mount: Deactivated successfully.
May 16 00:05:06.840941 systemd[1]: var-lib-kubelet-pods-5e5b6712\x2d56de\x2d427d\x2da641\x2d012139968840-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4k4zz.mount: Deactivated successfully.
May 16 00:05:06.841031 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b-shm.mount: Deactivated successfully.
May 16 00:05:06.841118 systemd[1]: var-lib-kubelet-pods-a85a73a3\x2dd462\x2d43e9\x2dbe1d\x2d814c23557f89-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dngn7f.mount: Deactivated successfully.
May 16 00:05:06.841205 systemd[1]: var-lib-kubelet-pods-a85a73a3\x2dd462\x2d43e9\x2dbe1d\x2d814c23557f89-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 16 00:05:06.841294 systemd[1]: var-lib-kubelet-pods-a85a73a3\x2dd462\x2d43e9\x2dbe1d\x2d814c23557f89-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 16 00:05:06.841948 kubelet[3141]: I0516 00:05:06.841922 3141 scope.go:117] "RemoveContainer" containerID="8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0"
May 16 00:05:06.846240 containerd[1903]: time="2025-05-16T00:05:06.846185828Z" level=info msg="RemoveContainer for \"8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0\""
May 16 00:05:06.852087 containerd[1903]: time="2025-05-16T00:05:06.852023210Z" level=info msg="RemoveContainer for \"8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0\" returns successfully"
May 16 00:05:06.852347 kubelet[3141]: I0516 00:05:06.852297 3141 scope.go:117] "RemoveContainer" containerID="ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e"
May 16 00:05:06.853627 containerd[1903]: time="2025-05-16T00:05:06.853582415Z" level=info msg="RemoveContainer for \"ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e\""
May 16 00:05:06.859307 containerd[1903]: time="2025-05-16T00:05:06.859255955Z" level=info msg="RemoveContainer for \"ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e\" returns successfully"
May 16 00:05:06.859576 kubelet[3141]: I0516 00:05:06.859536 3141 scope.go:117] "RemoveContainer" containerID="788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399"
May 16 00:05:06.861000 containerd[1903]: time="2025-05-16T00:05:06.860966520Z" level=info msg="RemoveContainer for \"788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399\""
May 16 00:05:06.866157 containerd[1903]: time="2025-05-16T00:05:06.866111702Z" level=info msg="RemoveContainer for \"788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399\" returns successfully"
May 16 00:05:06.866405 kubelet[3141]: I0516 00:05:06.866376 3141 scope.go:117] "RemoveContainer" containerID="62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87"
May 16 00:05:06.866613 containerd[1903]: time="2025-05-16T00:05:06.866580727Z" level=error msg="ContainerStatus for \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\": not found"
May 16 00:05:06.866716 kubelet[3141]: E0516 00:05:06.866694 3141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\": not found" containerID="62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87"
May 16 00:05:06.866772 kubelet[3141]: I0516 00:05:06.866721 3141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87"} err="failed to get container status \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\": rpc error: code = NotFound desc = an error occurred when try to find container \"62af58ee01866c3adb5daba0394e87ccb27fed77f8f8257b845793c355dd6c87\": not found"
May 16 00:05:06.866772 kubelet[3141]: I0516 00:05:06.866740 3141 scope.go:117] "RemoveContainer" containerID="48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe"
May 16 00:05:06.867021 containerd[1903]: time="2025-05-16T00:05:06.866949374Z" level=error msg="ContainerStatus for \"48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe\": not found"
May 16 00:05:06.867096 kubelet[3141]: E0516 00:05:06.867074 3141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe\": not found" containerID="48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe"
May 16 00:05:06.867096 kubelet[3141]: I0516 00:05:06.867091 3141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe"} err="failed to get container status \"48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"48e2e7601e6478a20ae22a1acf75b194a1f731e8810bbd6a1d11167724f454fe\": not found"
May 16 00:05:06.867360 kubelet[3141]: I0516 00:05:06.867103 3141 scope.go:117] "RemoveContainer" containerID="8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0"
May 16 00:05:06.867360 kubelet[3141]: E0516 00:05:06.867343 3141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0\": not found" containerID="8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0"
May 16 00:05:06.867445 containerd[1903]: time="2025-05-16T00:05:06.867244346Z" level=error msg="ContainerStatus for \"8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0\": not found"
May 16 00:05:06.867475 kubelet[3141]: I0516 00:05:06.867358 3141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0"} err="failed to get container status \"8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"8703e845aed32690885a87e6098fe25f71c8fe2adeae8349143ee518e81b34f0\": not found"
May 16 00:05:06.867475 kubelet[3141]: I0516 00:05:06.867370 3141 scope.go:117] "RemoveContainer" containerID="ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e"
May 16 00:05:06.867528 containerd[1903]: time="2025-05-16T00:05:06.867483015Z" level=error msg="ContainerStatus for \"ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e\": not found"
May 16 00:05:06.867611 kubelet[3141]: E0516 00:05:06.867588 3141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e\": not found" containerID="ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e"
May 16 00:05:06.867665 kubelet[3141]: I0516 00:05:06.867649 3141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e"} err="failed to get container status \"ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef068b52091de4eb33da08d9291ff24c0a45e64ea68f852d193a86f328f0397e\": not found"
May 16 00:05:06.867695 kubelet[3141]: I0516 00:05:06.867665 3141 scope.go:117] "RemoveContainer" containerID="788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399"
May 16 00:05:06.867922 containerd[1903]: time="2025-05-16T00:05:06.867848246Z" level=error msg="ContainerStatus for \"788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399\": not found"
May 16 00:05:06.868107 kubelet[3141]: E0516 00:05:06.868078 3141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399\": not found" containerID="788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399"
May 16 00:05:06.868107 kubelet[3141]: I0516 00:05:06.868105 3141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399"} err="failed to get container status \"788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399\": rpc error: code = NotFound desc = an error occurred when try to find container \"788e95aed4c3a7699a404ca827787de1719a3e02a5fc439b1c71537bb7200399\": not found"
May 16 00:05:07.444073 kubelet[3141]: I0516 00:05:07.444000 3141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e5b6712-56de-427d-a641-012139968840" path="/var/lib/kubelet/pods/5e5b6712-56de-427d-a641-012139968840/volumes"
May 16 00:05:07.444679 kubelet[3141]: I0516 00:05:07.444636 3141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a85a73a3-d462-43e9-be1d-814c23557f89" path="/var/lib/kubelet/pods/a85a73a3-d462-43e9-be1d-814c23557f89/volumes"
May 16 00:05:07.795047 sshd[4918]: Connection closed by 139.178.89.65 port 36058
May 16 00:05:07.797516 sshd-session[4916]: pam_unix(sshd:session): session closed for user core
May 16 00:05:07.806032 systemd[1]: sshd@23-172.31.20.206:22-139.178.89.65:36058.service: Deactivated successfully.
May 16 00:05:07.808471 systemd[1]: session-24.scope: Deactivated successfully.
May 16 00:05:07.810412 systemd-logind[1878]: Session 24 logged out. Waiting for processes to exit.
May 16 00:05:07.811669 systemd-logind[1878]: Removed session 24.
May 16 00:05:07.830590 systemd[1]: Started sshd@24-172.31.20.206:22-139.178.89.65:42102.service - OpenSSH per-connection server daemon (139.178.89.65:42102).
May 16 00:05:08.011922 sshd[5080]: Accepted publickey for core from 139.178.89.65 port 42102 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg
May 16 00:05:08.014856 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:05:08.026837 systemd-logind[1878]: New session 25 of user core.
May 16 00:05:08.032018 systemd[1]: Started session-25.scope - Session 25 of User core.
May 16 00:05:08.890196 ntpd[1873]: Deleting interface #12 lxc_health, fe80::9459:b5ff:fe9f:1c02%8#123, interface stats: received=0, sent=0, dropped=0, active_time=39 secs
May 16 00:05:08.890634 ntpd[1873]: 16 May 00:05:08 ntpd[1873]: Deleting interface #12 lxc_health, fe80::9459:b5ff:fe9f:1c02%8#123, interface stats: received=0, sent=0, dropped=0, active_time=39 secs
May 16 00:05:08.898333 sshd[5082]: Connection closed by 139.178.89.65 port 42102
May 16 00:05:08.900015 sshd-session[5080]: pam_unix(sshd:session): session closed for user core
May 16 00:05:08.906094 systemd[1]: sshd@24-172.31.20.206:22-139.178.89.65:42102.service: Deactivated successfully.
May 16 00:05:08.910547 systemd[1]: session-25.scope: Deactivated successfully.
May 16 00:05:08.914674 systemd-logind[1878]: Session 25 logged out. Waiting for processes to exit.
May 16 00:05:08.919323 systemd-logind[1878]: Removed session 25.
May 16 00:05:08.921941 kubelet[3141]: I0516 00:05:08.920987 3141 memory_manager.go:355] "RemoveStaleState removing state" podUID="5e5b6712-56de-427d-a641-012139968840" containerName="cilium-operator"
May 16 00:05:08.921941 kubelet[3141]: I0516 00:05:08.921050 3141 memory_manager.go:355] "RemoveStaleState removing state" podUID="a85a73a3-d462-43e9-be1d-814c23557f89" containerName="cilium-agent"
May 16 00:05:08.957915 systemd[1]: Started sshd@25-172.31.20.206:22-139.178.89.65:42118.service - OpenSSH per-connection server daemon (139.178.89.65:42118).
May 16 00:05:08.991647 systemd[1]: Created slice kubepods-burstable-podc0f874c3_8cd6_4ef9_9872_6d3b379ad7d1.slice - libcontainer container kubepods-burstable-podc0f874c3_8cd6_4ef9_9872_6d3b379ad7d1.slice.
May 16 00:05:09.029888 kubelet[3141]: I0516 00:05:09.029493 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-lib-modules\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.029888 kubelet[3141]: I0516 00:05:09.029543 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-xtables-lock\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.029888 kubelet[3141]: I0516 00:05:09.029571 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-cilium-cgroup\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.029888 kubelet[3141]: I0516 00:05:09.029597 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-host-proc-sys-kernel\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.029888 kubelet[3141]: I0516 00:05:09.029622 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-cni-path\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.029888 kubelet[3141]: I0516 00:05:09.029647 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-cilium-ipsec-secrets\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.030253 kubelet[3141]: I0516 00:05:09.029673 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-cilium-run\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.030253 kubelet[3141]: I0516 00:05:09.029698 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-hostproc\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.030253 kubelet[3141]: I0516 00:05:09.029722 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-clustermesh-secrets\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.030253 kubelet[3141]: I0516 00:05:09.029744 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-cilium-config-path\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.030253 kubelet[3141]: I0516 00:05:09.029782 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-host-proc-sys-net\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.030253 kubelet[3141]: I0516 00:05:09.029806 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-hubble-tls\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.030487 kubelet[3141]: I0516 00:05:09.029831 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-etc-cni-netd\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.030487 kubelet[3141]: I0516 00:05:09.029858 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgwhw\" (UniqueName: \"kubernetes.io/projected/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-kube-api-access-kgwhw\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.030487 kubelet[3141]: I0516 00:05:09.029884 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1-bpf-maps\") pod \"cilium-8rjpm\" (UID: \"c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1\") " pod="kube-system/cilium-8rjpm"
May 16 00:05:09.154150 sshd[5094]: Accepted publickey for core from 139.178.89.65 port 42118 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg
May 16 00:05:09.168780 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:05:09.192054 systemd-logind[1878]: New session 26 of user core.
May 16 00:05:09.200983 systemd[1]: Started session-26.scope - Session 26 of User core.
May 16 00:05:09.332810 sshd[5100]: Connection closed by 139.178.89.65 port 42118
May 16 00:05:09.334663 sshd-session[5094]: pam_unix(sshd:session): session closed for user core
May 16 00:05:09.338800 containerd[1903]: time="2025-05-16T00:05:09.338684928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rjpm,Uid:c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1,Namespace:kube-system,Attempt:0,}"
May 16 00:05:09.339259 systemd[1]: sshd@25-172.31.20.206:22-139.178.89.65:42118.service: Deactivated successfully.
May 16 00:05:09.344384 systemd[1]: session-26.scope: Deactivated successfully.
May 16 00:05:09.345729 systemd-logind[1878]: Session 26 logged out. Waiting for processes to exit.
May 16 00:05:09.347300 systemd-logind[1878]: Removed session 26.
May 16 00:05:09.381372 systemd[1]: Started sshd@26-172.31.20.206:22-139.178.89.65:42122.service - OpenSSH per-connection server daemon (139.178.89.65:42122).
May 16 00:05:09.390831 containerd[1903]: time="2025-05-16T00:05:09.390510000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:05:09.390831 containerd[1903]: time="2025-05-16T00:05:09.390709643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:05:09.391391 containerd[1903]: time="2025-05-16T00:05:09.390804171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:05:09.391807 containerd[1903]: time="2025-05-16T00:05:09.391674305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:05:09.420217 systemd[1]: Started cri-containerd-7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b.scope - libcontainer container 7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b.
May 16 00:05:09.459851 containerd[1903]: time="2025-05-16T00:05:09.458248912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rjpm,Uid:c0f874c3-8cd6-4ef9-9872-6d3b379ad7d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b\""
May 16 00:05:09.463667 containerd[1903]: time="2025-05-16T00:05:09.463615591Z" level=info msg="CreateContainer within sandbox \"7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 00:05:09.489531 containerd[1903]: time="2025-05-16T00:05:09.489471515Z" level=info msg="CreateContainer within sandbox \"7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3624a3d0dd19fc32817b8a7db26de2a3f4b1b29ab7c3c07f04b639dd9aaab8dc\""
May 16 00:05:09.491324 containerd[1903]: time="2025-05-16T00:05:09.490533252Z" level=info msg="StartContainer for \"3624a3d0dd19fc32817b8a7db26de2a3f4b1b29ab7c3c07f04b639dd9aaab8dc\""
May 16 00:05:09.525001 systemd[1]: Started cri-containerd-3624a3d0dd19fc32817b8a7db26de2a3f4b1b29ab7c3c07f04b639dd9aaab8dc.scope - libcontainer container 3624a3d0dd19fc32817b8a7db26de2a3f4b1b29ab7c3c07f04b639dd9aaab8dc.
May 16 00:05:09.557621 containerd[1903]: time="2025-05-16T00:05:09.557575892Z" level=info msg="StartContainer for \"3624a3d0dd19fc32817b8a7db26de2a3f4b1b29ab7c3c07f04b639dd9aaab8dc\" returns successfully"
May 16 00:05:09.563934 sshd[5114]: Accepted publickey for core from 139.178.89.65 port 42122 ssh2: RSA SHA256:Rm8vot4buv8m3t9UZx/JkaJKik9XcAFOGb8J2kBvbpg
May 16 00:05:09.566355 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:05:09.573170 systemd-logind[1878]: New session 27 of user core.
May 16 00:05:09.579160 systemd[1]: Started session-27.scope - Session 27 of User core.
May 16 00:05:09.579556 systemd[1]: cri-containerd-3624a3d0dd19fc32817b8a7db26de2a3f4b1b29ab7c3c07f04b639dd9aaab8dc.scope: Deactivated successfully.
May 16 00:05:09.635522 containerd[1903]: time="2025-05-16T00:05:09.635445487Z" level=info msg="shim disconnected" id=3624a3d0dd19fc32817b8a7db26de2a3f4b1b29ab7c3c07f04b639dd9aaab8dc namespace=k8s.io
May 16 00:05:09.635522 containerd[1903]: time="2025-05-16T00:05:09.635516181Z" level=warning msg="cleaning up after shim disconnected" id=3624a3d0dd19fc32817b8a7db26de2a3f4b1b29ab7c3c07f04b639dd9aaab8dc namespace=k8s.io
May 16 00:05:09.635522 containerd[1903]: time="2025-05-16T00:05:09.635528096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:05:09.781635 containerd[1903]: time="2025-05-16T00:05:09.781436615Z" level=info msg="CreateContainer within sandbox \"7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 00:05:09.804347 containerd[1903]: time="2025-05-16T00:05:09.804302852Z" level=info msg="CreateContainer within sandbox \"7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"55e151050ed7c19200977090f17226a65c5d2e61252d445dc2613d65936ad6c4\""
May 16 00:05:09.807317 containerd[1903]: time="2025-05-16T00:05:09.807280641Z" level=info msg="StartContainer for \"55e151050ed7c19200977090f17226a65c5d2e61252d445dc2613d65936ad6c4\""
May 16 00:05:09.834991 systemd[1]: Started cri-containerd-55e151050ed7c19200977090f17226a65c5d2e61252d445dc2613d65936ad6c4.scope - libcontainer container 55e151050ed7c19200977090f17226a65c5d2e61252d445dc2613d65936ad6c4.
May 16 00:05:09.866569 containerd[1903]: time="2025-05-16T00:05:09.866448844Z" level=info msg="StartContainer for \"55e151050ed7c19200977090f17226a65c5d2e61252d445dc2613d65936ad6c4\" returns successfully"
May 16 00:05:09.877124 systemd[1]: cri-containerd-55e151050ed7c19200977090f17226a65c5d2e61252d445dc2613d65936ad6c4.scope: Deactivated successfully.
May 16 00:05:09.914457 containerd[1903]: time="2025-05-16T00:05:09.914322443Z" level=info msg="shim disconnected" id=55e151050ed7c19200977090f17226a65c5d2e61252d445dc2613d65936ad6c4 namespace=k8s.io
May 16 00:05:09.914457 containerd[1903]: time="2025-05-16T00:05:09.914394502Z" level=warning msg="cleaning up after shim disconnected" id=55e151050ed7c19200977090f17226a65c5d2e61252d445dc2613d65936ad6c4 namespace=k8s.io
May 16 00:05:09.914457 containerd[1903]: time="2025-05-16T00:05:09.914403229Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:05:10.790660 containerd[1903]: time="2025-05-16T00:05:10.790264429Z" level=info msg="CreateContainer within sandbox \"7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 00:05:10.820170 containerd[1903]: time="2025-05-16T00:05:10.820118072Z" level=info msg="CreateContainer within sandbox \"7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"39ecc8cb04f52756f4dd092b44dba9f1ec05eb5259e261f4164026ec7867d817\""
May 16 00:05:10.821707 containerd[1903]: time="2025-05-16T00:05:10.821661671Z" level=info msg="StartContainer for 
\"39ecc8cb04f52756f4dd092b44dba9f1ec05eb5259e261f4164026ec7867d817\"" May 16 00:05:10.865958 systemd[1]: Started cri-containerd-39ecc8cb04f52756f4dd092b44dba9f1ec05eb5259e261f4164026ec7867d817.scope - libcontainer container 39ecc8cb04f52756f4dd092b44dba9f1ec05eb5259e261f4164026ec7867d817. May 16 00:05:10.904838 containerd[1903]: time="2025-05-16T00:05:10.904694001Z" level=info msg="StartContainer for \"39ecc8cb04f52756f4dd092b44dba9f1ec05eb5259e261f4164026ec7867d817\" returns successfully" May 16 00:05:10.913531 systemd[1]: cri-containerd-39ecc8cb04f52756f4dd092b44dba9f1ec05eb5259e261f4164026ec7867d817.scope: Deactivated successfully. May 16 00:05:10.967539 containerd[1903]: time="2025-05-16T00:05:10.967281656Z" level=info msg="shim disconnected" id=39ecc8cb04f52756f4dd092b44dba9f1ec05eb5259e261f4164026ec7867d817 namespace=k8s.io May 16 00:05:10.967539 containerd[1903]: time="2025-05-16T00:05:10.967342907Z" level=warning msg="cleaning up after shim disconnected" id=39ecc8cb04f52756f4dd092b44dba9f1ec05eb5259e261f4164026ec7867d817 namespace=k8s.io May 16 00:05:10.967539 containerd[1903]: time="2025-05-16T00:05:10.967352848Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:05:11.139640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39ecc8cb04f52756f4dd092b44dba9f1ec05eb5259e261f4164026ec7867d817-rootfs.mount: Deactivated successfully. 
May 16 00:05:11.565057 kubelet[3141]: E0516 00:05:11.565016 3141 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 00:05:11.798160 containerd[1903]: time="2025-05-16T00:05:11.798115335Z" level=info msg="CreateContainer within sandbox \"7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 00:05:11.829523 containerd[1903]: time="2025-05-16T00:05:11.829073969Z" level=info msg="CreateContainer within sandbox \"7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f8a96b5abd3770b5e82c06e7bff82a71592b8e8a6674d578769585ecad048852\""
May 16 00:05:11.831204 containerd[1903]: time="2025-05-16T00:05:11.831167629Z" level=info msg="StartContainer for \"f8a96b5abd3770b5e82c06e7bff82a71592b8e8a6674d578769585ecad048852\""
May 16 00:05:11.873092 systemd[1]: Started cri-containerd-f8a96b5abd3770b5e82c06e7bff82a71592b8e8a6674d578769585ecad048852.scope - libcontainer container f8a96b5abd3770b5e82c06e7bff82a71592b8e8a6674d578769585ecad048852.
May 16 00:05:11.911813 systemd[1]: cri-containerd-f8a96b5abd3770b5e82c06e7bff82a71592b8e8a6674d578769585ecad048852.scope: Deactivated successfully.
May 16 00:05:11.917818 containerd[1903]: time="2025-05-16T00:05:11.917315837Z" level=info msg="StartContainer for \"f8a96b5abd3770b5e82c06e7bff82a71592b8e8a6674d578769585ecad048852\" returns successfully"
May 16 00:05:11.960693 containerd[1903]: time="2025-05-16T00:05:11.960426976Z" level=info msg="shim disconnected" id=f8a96b5abd3770b5e82c06e7bff82a71592b8e8a6674d578769585ecad048852 namespace=k8s.io
May 16 00:05:11.960693 containerd[1903]: time="2025-05-16T00:05:11.960699496Z" level=warning msg="cleaning up after shim disconnected" id=f8a96b5abd3770b5e82c06e7bff82a71592b8e8a6674d578769585ecad048852 namespace=k8s.io
May 16 00:05:11.960929 containerd[1903]: time="2025-05-16T00:05:11.960710080Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:05:11.975439 containerd[1903]: time="2025-05-16T00:05:11.975381489Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:05:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 16 00:05:12.139727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8a96b5abd3770b5e82c06e7bff82a71592b8e8a6674d578769585ecad048852-rootfs.mount: Deactivated successfully.
May 16 00:05:12.800002 containerd[1903]: time="2025-05-16T00:05:12.799939029Z" level=info msg="CreateContainer within sandbox \"7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 00:05:12.826131 containerd[1903]: time="2025-05-16T00:05:12.826068442Z" level=info msg="CreateContainer within sandbox \"7a717b53fc1df323dbf4d83725324c74f74579a82d7258239f5d529fed58981b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f99b99d3aae54d7ea58a47f0d856e55ceb333aa88681da48bf73a9c8841d2bf2\""
May 16 00:05:12.826702 containerd[1903]: time="2025-05-16T00:05:12.826632011Z" level=info msg="StartContainer for \"f99b99d3aae54d7ea58a47f0d856e55ceb333aa88681da48bf73a9c8841d2bf2\""
May 16 00:05:12.863984 systemd[1]: Started cri-containerd-f99b99d3aae54d7ea58a47f0d856e55ceb333aa88681da48bf73a9c8841d2bf2.scope - libcontainer container f99b99d3aae54d7ea58a47f0d856e55ceb333aa88681da48bf73a9c8841d2bf2.
May 16 00:05:12.899243 containerd[1903]: time="2025-05-16T00:05:12.899198308Z" level=info msg="StartContainer for \"f99b99d3aae54d7ea58a47f0d856e55ceb333aa88681da48bf73a9c8841d2bf2\" returns successfully"
May 16 00:05:13.139996 systemd[1]: run-containerd-runc-k8s.io-f99b99d3aae54d7ea58a47f0d856e55ceb333aa88681da48bf73a9c8841d2bf2-runc.SdkI1N.mount: Deactivated successfully.
May 16 00:05:13.510158 kubelet[3141]: I0516 00:05:13.508006 3141 setters.go:602] "Node became not ready" node="ip-172-31-20-206" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T00:05:13Z","lastTransitionTime":"2025-05-16T00:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 16 00:05:13.592857 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 16 00:05:13.820041 kubelet[3141]: I0516 00:05:13.819409 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8rjpm" podStartSLOduration=5.819391256 podStartE2EDuration="5.819391256s" podCreationTimestamp="2025-05-16 00:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:05:13.819040709 +0000 UTC m=+102.503353918" watchObservedRunningTime="2025-05-16 00:05:13.819391256 +0000 UTC m=+102.503704468"
May 16 00:05:16.592025 systemd-networkd[1814]: lxc_health: Link UP
May 16 00:05:16.598556 (udev-worker)[5975]: Network interface NamePolicy= disabled on kernel command line.
May 16 00:05:16.599630 systemd-networkd[1814]: lxc_health: Gained carrier
May 16 00:05:17.802031 systemd-networkd[1814]: lxc_health: Gained IPv6LL
May 16 00:05:18.828070 systemd[1]: run-containerd-runc-k8s.io-f99b99d3aae54d7ea58a47f0d856e55ceb333aa88681da48bf73a9c8841d2bf2-runc.crq82q.mount: Deactivated successfully.
May 16 00:05:19.890266 ntpd[1873]: Listen normally on 15 lxc_health [fe80::1860:6ff:fe45:1c5a%14]:123
May 16 00:05:19.890687 ntpd[1873]: 16 May 00:05:19 ntpd[1873]: Listen normally on 15 lxc_health [fe80::1860:6ff:fe45:1c5a%14]:123
May 16 00:05:23.198098 kubelet[3141]: E0516 00:05:23.196607 3141 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38528->127.0.0.1:45385: write tcp 127.0.0.1:38528->127.0.0.1:45385: write: broken pipe
May 16 00:05:25.464745 sshd[5186]: Connection closed by 139.178.89.65 port 42122
May 16 00:05:25.465654 sshd-session[5114]: pam_unix(sshd:session): session closed for user core
May 16 00:05:25.470843 systemd-logind[1878]: Session 27 logged out. Waiting for processes to exit.
May 16 00:05:25.471605 systemd[1]: sshd@26-172.31.20.206:22-139.178.89.65:42122.service: Deactivated successfully.
May 16 00:05:25.474814 systemd[1]: session-27.scope: Deactivated successfully.
May 16 00:05:25.476078 systemd-logind[1878]: Removed session 27.
May 16 00:05:31.479703 containerd[1903]: time="2025-05-16T00:05:31.479650722Z" level=info msg="StopPodSandbox for \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\""
May 16 00:05:31.480125 containerd[1903]: time="2025-05-16T00:05:31.479743261Z" level=info msg="TearDown network for sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" successfully"
May 16 00:05:31.480125 containerd[1903]: time="2025-05-16T00:05:31.479753866Z" level=info msg="StopPodSandbox for \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" returns successfully"
May 16 00:05:31.482178 containerd[1903]: time="2025-05-16T00:05:31.480744756Z" level=info msg="RemovePodSandbox for \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\""
May 16 00:05:31.482178 containerd[1903]: time="2025-05-16T00:05:31.480802490Z" level=info msg="Forcibly stopping sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\""
May 16 00:05:31.482178 containerd[1903]: time="2025-05-16T00:05:31.480859922Z" level=info msg="TearDown network for sandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" successfully"
May 16 00:05:31.487261 containerd[1903]: time="2025-05-16T00:05:31.487204572Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:05:31.487414 containerd[1903]: time="2025-05-16T00:05:31.487284465Z" level=info msg="RemovePodSandbox \"f2e94d5c33adaa2f8d6ceadadfdc1dfadabc288bdd7d22542ffa7ee024248c5b\" returns successfully"
May 16 00:05:31.487852 containerd[1903]: time="2025-05-16T00:05:31.487808203Z" level=info msg="StopPodSandbox for \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\""
May 16 00:05:31.487971 containerd[1903]: time="2025-05-16T00:05:31.487895710Z" level=info msg="TearDown network for sandbox \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\" successfully"
May 16 00:05:31.487971 containerd[1903]: time="2025-05-16T00:05:31.487905270Z" level=info msg="StopPodSandbox for \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\" returns successfully"
May 16 00:05:31.488403 containerd[1903]: time="2025-05-16T00:05:31.488336480Z" level=info msg="RemovePodSandbox for \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\""
May 16 00:05:31.488403 containerd[1903]: time="2025-05-16T00:05:31.488391067Z" level=info msg="Forcibly stopping sandbox \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\""
May 16 00:05:31.488828 containerd[1903]: time="2025-05-16T00:05:31.488588764Z" level=info msg="TearDown network for sandbox \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\" successfully"
May 16 00:05:31.493972 containerd[1903]: time="2025-05-16T00:05:31.493926722Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:05:31.494116 containerd[1903]: time="2025-05-16T00:05:31.493989402Z" level=info msg="RemovePodSandbox \"ec89a0e66aa7100ff5c511d0a401033c7f9caa5d72c193daec1e6be221759801\" returns successfully"
May 16 00:05:40.412252 systemd[1]: cri-containerd-498eeb94a677186451556a7cb4e2bd9e60b5cd8503db9b19237b590ba4fd053d.scope: Deactivated successfully.
May 16 00:05:40.413074 systemd[1]: cri-containerd-498eeb94a677186451556a7cb4e2bd9e60b5cd8503db9b19237b590ba4fd053d.scope: Consumed 3.281s CPU time, 28.0M memory peak, 0B memory swap peak.
May 16 00:05:40.441703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-498eeb94a677186451556a7cb4e2bd9e60b5cd8503db9b19237b590ba4fd053d-rootfs.mount: Deactivated successfully.
May 16 00:05:40.476049 containerd[1903]: time="2025-05-16T00:05:40.475980590Z" level=info msg="shim disconnected" id=498eeb94a677186451556a7cb4e2bd9e60b5cd8503db9b19237b590ba4fd053d namespace=k8s.io
May 16 00:05:40.476049 containerd[1903]: time="2025-05-16T00:05:40.476038354Z" level=warning msg="cleaning up after shim disconnected" id=498eeb94a677186451556a7cb4e2bd9e60b5cd8503db9b19237b590ba4fd053d namespace=k8s.io
May 16 00:05:40.476049 containerd[1903]: time="2025-05-16T00:05:40.476050893Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:05:40.877265 kubelet[3141]: I0516 00:05:40.877172 3141 scope.go:117] "RemoveContainer" containerID="498eeb94a677186451556a7cb4e2bd9e60b5cd8503db9b19237b590ba4fd053d"
May 16 00:05:40.884079 containerd[1903]: time="2025-05-16T00:05:40.883956611Z" level=info msg="CreateContainer within sandbox \"a1baedf124beadbed645c554eab7166e45cb7a20a83475c0adb41ffd983f5ef7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 16 00:05:40.905381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151394367.mount: Deactivated successfully.
May 16 00:05:40.915552 containerd[1903]: time="2025-05-16T00:05:40.915451157Z" level=info msg="CreateContainer within sandbox \"a1baedf124beadbed645c554eab7166e45cb7a20a83475c0adb41ffd983f5ef7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"589fe0a964a3f83f9ee702ce4fba961fe5149ed36e03e4836c4e06db490db550\""
May 16 00:05:40.916020 containerd[1903]: time="2025-05-16T00:05:40.915983329Z" level=info msg="StartContainer for \"589fe0a964a3f83f9ee702ce4fba961fe5149ed36e03e4836c4e06db490db550\""
May 16 00:05:40.956027 systemd[1]: Started cri-containerd-589fe0a964a3f83f9ee702ce4fba961fe5149ed36e03e4836c4e06db490db550.scope - libcontainer container 589fe0a964a3f83f9ee702ce4fba961fe5149ed36e03e4836c4e06db490db550.
May 16 00:05:41.013793 containerd[1903]: time="2025-05-16T00:05:41.013723001Z" level=info msg="StartContainer for \"589fe0a964a3f83f9ee702ce4fba961fe5149ed36e03e4836c4e06db490db550\" returns successfully"
May 16 00:05:44.473021 systemd[1]: cri-containerd-e2315146917f35ef24eca4d9464cde423df749967837bbaa9df893d6eb1585c9.scope: Deactivated successfully.
May 16 00:05:44.473315 systemd[1]: cri-containerd-e2315146917f35ef24eca4d9464cde423df749967837bbaa9df893d6eb1585c9.scope: Consumed 2.027s CPU time, 20.3M memory peak, 0B memory swap peak.
May 16 00:05:44.502700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2315146917f35ef24eca4d9464cde423df749967837bbaa9df893d6eb1585c9-rootfs.mount: Deactivated successfully.
May 16 00:05:44.525860 containerd[1903]: time="2025-05-16T00:05:44.525752646Z" level=info msg="shim disconnected" id=e2315146917f35ef24eca4d9464cde423df749967837bbaa9df893d6eb1585c9 namespace=k8s.io
May 16 00:05:44.525860 containerd[1903]: time="2025-05-16T00:05:44.525835815Z" level=warning msg="cleaning up after shim disconnected" id=e2315146917f35ef24eca4d9464cde423df749967837bbaa9df893d6eb1585c9 namespace=k8s.io
May 16 00:05:44.525860 containerd[1903]: time="2025-05-16T00:05:44.525845027Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:05:44.676008 kubelet[3141]: E0516 00:05:44.675945 3141 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-206?timeout=10s\": context deadline exceeded"
May 16 00:05:44.888125 kubelet[3141]: I0516 00:05:44.888101 3141 scope.go:117] "RemoveContainer" containerID="e2315146917f35ef24eca4d9464cde423df749967837bbaa9df893d6eb1585c9"
May 16 00:05:44.890068 containerd[1903]: time="2025-05-16T00:05:44.889978218Z" level=info msg="CreateContainer within sandbox \"18c45c6b1aa6836943ad45255d1fa801966243af3fb30fb358c9c490532bf7cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 16 00:05:44.915228 containerd[1903]: time="2025-05-16T00:05:44.915160948Z" level=info msg="CreateContainer within sandbox \"18c45c6b1aa6836943ad45255d1fa801966243af3fb30fb358c9c490532bf7cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"254aa22828b7da9d65146c0b5e04ffecb324f97459a89693e66a0d044edaf8e2\""
May 16 00:05:44.915723 containerd[1903]: time="2025-05-16T00:05:44.915688247Z" level=info msg="StartContainer for \"254aa22828b7da9d65146c0b5e04ffecb324f97459a89693e66a0d044edaf8e2\""
May 16 00:05:44.951984 systemd[1]: Started cri-containerd-254aa22828b7da9d65146c0b5e04ffecb324f97459a89693e66a0d044edaf8e2.scope - libcontainer container 254aa22828b7da9d65146c0b5e04ffecb324f97459a89693e66a0d044edaf8e2.
May 16 00:05:45.005444 containerd[1903]: time="2025-05-16T00:05:45.005388681Z" level=info msg="StartContainer for \"254aa22828b7da9d65146c0b5e04ffecb324f97459a89693e66a0d044edaf8e2\" returns successfully"
May 16 00:05:54.677166 kubelet[3141]: E0516 00:05:54.677018 3141 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-206?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"