Apr 17 23:35:18.941843 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:35:18.941878 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:35:18.941898 kernel: BIOS-provided physical RAM map:
Apr 17 23:35:18.941910 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 23:35:18.941921 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 17 23:35:18.941933 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 17 23:35:18.941946 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 17 23:35:18.941959 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 17 23:35:18.941972 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 17 23:35:18.941986 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 17 23:35:18.941998 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 17 23:35:18.942011 kernel: NX (Execute Disable) protection: active
Apr 17 23:35:18.942023 kernel: APIC: Static calls initialized
Apr 17 23:35:18.942035 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:35:18.942050 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 17 23:35:18.942067 kernel: SMBIOS 2.7 present.
Apr 17 23:35:18.942080 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 17 23:35:18.942092 kernel: Hypervisor detected: KVM
Apr 17 23:35:18.942105 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:35:18.942120 kernel: kvm-clock: using sched offset of 4461049849 cycles
Apr 17 23:35:18.942134 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:35:18.942149 kernel: tsc: Detected 2499.996 MHz processor
Apr 17 23:35:18.942162 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:35:18.942176 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:35:18.942188 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 17 23:35:18.942204 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:35:18.942228 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:35:18.942241 kernel: Using GB pages for direct mapping
Apr 17 23:35:18.942252 kernel: Secure boot disabled
Apr 17 23:35:18.942264 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:35:18.942277 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 17 23:35:18.942289 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 17 23:35:18.942302 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 17 23:35:18.942315 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 17 23:35:18.942331 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 17 23:35:18.942343 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 17 23:35:18.942369 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 17 23:35:18.942383 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 17 23:35:18.942396 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 17 23:35:18.942409 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 17 23:35:18.942429 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 17 23:35:18.942446 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 17 23:35:18.942461 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 17 23:35:18.942477 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 17 23:35:18.942492 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 17 23:35:18.942507 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 17 23:35:18.942522 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 17 23:35:18.942540 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 17 23:35:18.942555 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 17 23:35:18.942570 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 17 23:35:18.942585 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 17 23:35:18.942600 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 17 23:35:18.942615 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 17 23:35:18.942630 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 17 23:35:18.942645 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 17 23:35:18.942660 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 17 23:35:18.942675 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 17 23:35:18.942694 kernel: NUMA: Initialized distance table, cnt=1
Apr 17 23:35:18.942708 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 17 23:35:18.942723 kernel: Zone ranges:
Apr 17 23:35:18.942738 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:35:18.942753 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 17 23:35:18.942768 kernel: Normal empty
Apr 17 23:35:18.942783 kernel: Movable zone start for each node
Apr 17 23:35:18.942799 kernel: Early memory node ranges
Apr 17 23:35:18.942814 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:35:18.942832 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 17 23:35:18.942848 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 17 23:35:18.942863 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 17 23:35:18.942878 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:35:18.942893 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:35:18.942909 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 17 23:35:18.942924 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 17 23:35:18.942939 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 17 23:35:18.942953 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:35:18.942971 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 17 23:35:18.942985 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:35:18.942999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:35:18.943014 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:35:18.943028 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:35:18.943042 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:35:18.943057 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:35:18.943072 kernel: TSC deadline timer available
Apr 17 23:35:18.943086 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 17 23:35:18.943103 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:35:18.943118 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 17 23:35:18.943132 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:35:18.943146 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:35:18.943161 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 17 23:35:18.943175 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 17 23:35:18.943189 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 17 23:35:18.943203 kernel: pcpu-alloc: [0] 0 1
Apr 17 23:35:18.943217 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:35:18.943231 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:35:18.943250 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:35:18.943265 kernel: random: crng init done
Apr 17 23:35:18.943279 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:35:18.943294 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:35:18.943308 kernel: Fallback order for Node 0: 0
Apr 17 23:35:18.943323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 17 23:35:18.943337 kernel: Policy zone: DMA32
Apr 17 23:35:18.943352 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:35:18.943384 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162900K reserved, 0K cma-reserved)
Apr 17 23:35:18.943399 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:35:18.943413 kernel: Kernel/User page tables isolation: enabled
Apr 17 23:35:18.943427 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:35:18.943441 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:35:18.943455 kernel: Dynamic Preempt: voluntary
Apr 17 23:35:18.943469 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:35:18.943484 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:35:18.943499 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:35:18.943516 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:35:18.943530 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:35:18.943544 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:35:18.943558 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:35:18.943573 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:35:18.943587 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 17 23:35:18.943602 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:35:18.943630 kernel: Console: colour dummy device 80x25
Apr 17 23:35:18.943644 kernel: printk: console [tty0] enabled
Apr 17 23:35:18.943659 kernel: printk: console [ttyS0] enabled
Apr 17 23:35:18.943674 kernel: ACPI: Core revision 20230628
Apr 17 23:35:18.943689 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 17 23:35:18.943707 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:35:18.943722 kernel: x2apic enabled
Apr 17 23:35:18.943737 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:35:18.943753 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 17 23:35:18.943771 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Apr 17 23:35:18.943786 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 17 23:35:18.943801 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 17 23:35:18.943816 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:35:18.943831 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:35:18.943846 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:35:18.943860 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:35:18.943876 kernel: RETBleed: Vulnerable
Apr 17 23:35:18.943890 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:35:18.943905 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:35:18.943920 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:35:18.943937 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:35:18.943952 kernel: active return thunk: its_return_thunk
Apr 17 23:35:18.943967 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:35:18.943982 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:35:18.943997 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:35:18.944012 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:35:18.944027 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 17 23:35:18.944042 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 17 23:35:18.944057 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:35:18.944072 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:35:18.944087 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:35:18.944106 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 17 23:35:18.944120 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:35:18.944135 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 17 23:35:18.944150 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 17 23:35:18.944165 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 17 23:35:18.944180 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 17 23:35:18.944195 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 17 23:35:18.944210 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 17 23:35:18.944225 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 17 23:35:18.944240 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:35:18.944254 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:35:18.944269 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:35:18.944283 kernel: landlock: Up and running.
Apr 17 23:35:18.944297 kernel: SELinux: Initializing.
Apr 17 23:35:18.944313 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:35:18.944328 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:35:18.944343 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 17 23:35:18.944387 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:35:18.944404 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:35:18.944420 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:35:18.944440 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 17 23:35:18.944468 kernel: signal: max sigframe size: 3632
Apr 17 23:35:18.944493 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:35:18.944510 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:35:18.944526 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:35:18.944542 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:35:18.944559 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:35:18.944575 kernel: .... node #0, CPUs: #1
Apr 17 23:35:18.944592 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 17 23:35:18.944609 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 17 23:35:18.944629 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:35:18.944645 kernel: smpboot: Max logical packages: 1
Apr 17 23:35:18.944661 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Apr 17 23:35:18.944678 kernel: devtmpfs: initialized
Apr 17 23:35:18.944694 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:35:18.944710 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 17 23:35:18.944727 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:35:18.944743 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:35:18.944760 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:35:18.944779 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:35:18.944795 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:35:18.944812 kernel: audit: type=2000 audit(1776468919.032:1): state=initialized audit_enabled=0 res=1
Apr 17 23:35:18.944828 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:35:18.944845 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:35:18.944861 kernel: cpuidle: using governor menu
Apr 17 23:35:18.944877 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:35:18.944894 kernel: dca service started, version 1.12.1
Apr 17 23:35:18.944910 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:35:18.944929 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:35:18.944946 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:35:18.944962 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:35:18.944978 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:35:18.944994 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:35:18.945011 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:35:18.945027 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:35:18.945043 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:35:18.945059 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 17 23:35:18.945078 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:35:18.945095 kernel: ACPI: Interpreter enabled
Apr 17 23:35:18.945111 kernel: ACPI: PM: (supports S0 S5)
Apr 17 23:35:18.945126 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:35:18.945143 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:35:18.945159 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:35:18.945175 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 17 23:35:18.945192 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:35:18.945425 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:35:18.945579 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 17 23:35:18.945712 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 17 23:35:18.945732 kernel: acpiphp: Slot [3] registered
Apr 17 23:35:18.945749 kernel: acpiphp: Slot [4] registered
Apr 17 23:35:18.945765 kernel: acpiphp: Slot [5] registered
Apr 17 23:35:18.945781 kernel: acpiphp: Slot [6] registered
Apr 17 23:35:18.945797 kernel: acpiphp: Slot [7] registered
Apr 17 23:35:18.945816 kernel: acpiphp: Slot [8] registered
Apr 17 23:35:18.945833 kernel: acpiphp: Slot [9] registered
Apr 17 23:35:18.945848 kernel: acpiphp: Slot [10] registered
Apr 17 23:35:18.945865 kernel: acpiphp: Slot [11] registered
Apr 17 23:35:18.945881 kernel: acpiphp: Slot [12] registered
Apr 17 23:35:18.945897 kernel: acpiphp: Slot [13] registered
Apr 17 23:35:18.945913 kernel: acpiphp: Slot [14] registered
Apr 17 23:35:18.945929 kernel: acpiphp: Slot [15] registered
Apr 17 23:35:18.945945 kernel: acpiphp: Slot [16] registered
Apr 17 23:35:18.945964 kernel: acpiphp: Slot [17] registered
Apr 17 23:35:18.945981 kernel: acpiphp: Slot [18] registered
Apr 17 23:35:18.945997 kernel: acpiphp: Slot [19] registered
Apr 17 23:35:18.946013 kernel: acpiphp: Slot [20] registered
Apr 17 23:35:18.946028 kernel: acpiphp: Slot [21] registered
Apr 17 23:35:18.946044 kernel: acpiphp: Slot [22] registered
Apr 17 23:35:18.946060 kernel: acpiphp: Slot [23] registered
Apr 17 23:35:18.946076 kernel: acpiphp: Slot [24] registered
Apr 17 23:35:18.946093 kernel: acpiphp: Slot [25] registered
Apr 17 23:35:18.946108 kernel: acpiphp: Slot [26] registered
Apr 17 23:35:18.946128 kernel: acpiphp: Slot [27] registered
Apr 17 23:35:18.946144 kernel: acpiphp: Slot [28] registered
Apr 17 23:35:18.946160 kernel: acpiphp: Slot [29] registered
Apr 17 23:35:18.946176 kernel: acpiphp: Slot [30] registered
Apr 17 23:35:18.946192 kernel: acpiphp: Slot [31] registered
Apr 17 23:35:18.946208 kernel: PCI host bridge to bus 0000:00
Apr 17 23:35:18.947274 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:35:18.947449 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:35:18.947581 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:35:18.947699 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 17 23:35:18.947817 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 17 23:35:18.947934 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:35:18.948090 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 17 23:35:18.948239 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 17 23:35:18.948399 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 17 23:35:18.948538 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 17 23:35:18.948671 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 17 23:35:18.950522 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 17 23:35:18.950687 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 17 23:35:18.950831 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 17 23:35:18.950974 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 17 23:35:18.951118 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 17 23:35:18.951266 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 17 23:35:18.951442 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 17 23:35:18.951587 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 17 23:35:18.951725 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 17 23:35:18.951861 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:35:18.952011 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 17 23:35:18.952155 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 17 23:35:18.952295 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 17 23:35:18.954130 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 17 23:35:18.954161 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:35:18.954178 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:35:18.954193 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:35:18.954208 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:35:18.954237 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 17 23:35:18.954258 kernel: iommu: Default domain type: Translated
Apr 17 23:35:18.954272 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:35:18.954300 kernel: efivars: Registered efivars operations
Apr 17 23:35:18.954316 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:35:18.954330 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:35:18.954343 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 17 23:35:18.956382 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 17 23:35:18.956577 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 17 23:35:18.956729 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 17 23:35:18.956888 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:35:18.956911 kernel: vgaarb: loaded
Apr 17 23:35:18.956929 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 17 23:35:18.956946 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 17 23:35:18.956963 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:35:18.956980 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:35:18.956998 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:35:18.957015 kernel: pnp: PnP ACPI init
Apr 17 23:35:18.957035 kernel: pnp: PnP ACPI: found 5 devices
Apr 17 23:35:18.957053 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:35:18.957070 kernel: NET: Registered PF_INET protocol family
Apr 17 23:35:18.957087 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:35:18.957104 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 17 23:35:18.957121 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:35:18.957139 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:35:18.957156 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 17 23:35:18.957173 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 17 23:35:18.957193 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:35:18.957210 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:35:18.957227 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:35:18.957244 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:35:18.957391 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:35:18.957519 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:35:18.957643 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:35:18.957767 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 17 23:35:18.957891 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 17 23:35:18.958042 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 17 23:35:18.958064 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:35:18.958082 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:35:18.958099 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 17 23:35:18.958116 kernel: clocksource: Switched to clocksource tsc
Apr 17 23:35:18.958133 kernel: Initialise system trusted keyrings
Apr 17 23:35:18.958150 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 17 23:35:18.958167 kernel: Key type asymmetric registered
Apr 17 23:35:18.958187 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:35:18.958203 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:35:18.958228 kernel: io scheduler mq-deadline registered
Apr 17 23:35:18.958245 kernel: io scheduler kyber registered
Apr 17 23:35:18.958262 kernel: io scheduler bfq registered
Apr 17 23:35:18.958278 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:35:18.958295 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:35:18.958312 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:35:18.958329 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:35:18.958349 kernel: i8042: Warning: Keylock active
Apr 17 23:35:18.958988 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:35:18.959007 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:35:18.959182 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 17 23:35:18.959318 kernel: rtc_cmos 00:00: registered as rtc0
Apr 17 23:35:18.959473 kernel: rtc_cmos 00:00: setting system clock to 2026-04-17T23:35:18 UTC (1776468918)
Apr 17 23:35:18.959603 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 17 23:35:18.959624 kernel: intel_pstate: CPU model not supported
Apr 17 23:35:18.959646 kernel: efifb: probing for efifb
Apr 17 23:35:18.959662 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 17 23:35:18.959679 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 17 23:35:18.959697 kernel: efifb: scrolling: redraw
Apr 17 23:35:18.959714 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 17 23:35:18.959731 kernel: Console: switching to colour frame buffer device 100x37
Apr 17 23:35:18.959748 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:35:18.959764 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:35:18.959781 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:35:18.959802 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:35:18.959818 kernel: Segment Routing with IPv6
Apr 17 23:35:18.959835 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:35:18.959852 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:35:18.959869 kernel: Key type dns_resolver registered
Apr 17 23:35:18.959886 kernel: IPI shorthand broadcast: enabled
Apr 17 23:35:18.959928 kernel: sched_clock: Marking stable (469002099, 136556431)->(679365043, -73806513)
Apr 17 23:35:18.959949 kernel: registered taskstats version 1
Apr 17 23:35:18.959967 kernel: Loading compiled-in X.509 certificates
Apr 17 23:35:18.959987 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:35:18.960004 kernel: Key type .fscrypt registered
Apr 17 23:35:18.960021 kernel: Key type fscrypt-provisioning registered
Apr 17 23:35:18.960039 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:35:18.960056 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:35:18.960074 kernel: ima: No architecture policies found
Apr 17 23:35:18.960091 kernel: clk: Disabling unused clocks
Apr 17 23:35:18.960111 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:35:18.960129 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:35:18.960149 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:35:18.960167 kernel: Run /init as init process
Apr 17 23:35:18.960184 kernel: with arguments:
Apr 17 23:35:18.960201 kernel: /init
Apr 17 23:35:18.960218 kernel: with environment:
Apr 17 23:35:18.960235 kernel: HOME=/
Apr 17 23:35:18.960252 kernel: TERM=linux
Apr 17 23:35:18.960272 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:35:18.960297 systemd[1]: Detected virtualization amazon.
Apr 17 23:35:18.960316 systemd[1]: Detected architecture x86-64. Apr 17 23:35:18.960333 systemd[1]: Running in initrd. Apr 17 23:35:18.960351 systemd[1]: No hostname configured, using default hostname. Apr 17 23:35:18.962395 systemd[1]: Hostname set to . Apr 17 23:35:18.962420 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:35:18.962436 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:35:18.962452 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:35:18.962473 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:35:18.962491 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 23:35:18.962507 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:35:18.962523 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:35:18.962542 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:35:18.962564 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:35:18.962580 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:35:18.962616 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:35:18.962633 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:35:18.962650 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:35:18.962667 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:35:18.962685 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:35:18.962707 systemd[1]: Reached target timers.target - Timer Units. 
Apr 17 23:35:18.962724 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:35:18.962742 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:35:18.962761 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 23:35:18.962779 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:35:18.962797 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:35:18.962815 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:35:18.962833 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:35:18.962851 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:35:18.962873 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:35:18.962891 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:35:18.962909 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:35:18.962927 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:35:18.962945 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:35:18.962963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:35:18.962982 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:35:18.963031 systemd-journald[179]: Collecting audit messages is disabled. Apr 17 23:35:18.963075 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:35:18.963095 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:35:18.963113 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:35:18.963136 systemd-journald[179]: Journal started Apr 17 23:35:18.963172 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2f120c7dfc92e54b6902ea53183aea) is 4.7M, max 38.2M, 33.4M free. 
Apr 17 23:35:18.948981 systemd-modules-load[180]: Inserted module 'overlay' Apr 17 23:35:18.970055 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:35:18.974825 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:35:18.985595 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:35:18.990553 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:35:18.997110 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:35:18.997409 kernel: Bridge firewalling registered Apr 17 23:35:18.999086 systemd-modules-load[180]: Inserted module 'br_netfilter' Apr 17 23:35:19.007458 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 17 23:35:19.007703 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:35:19.008678 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:35:19.017858 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:35:19.021347 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:35:19.026561 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:35:19.028905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:35:19.038566 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:35:19.042911 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:35:19.052987 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 17 23:35:19.061983 dracut-cmdline[206]: dracut-dracut-053 Apr 17 23:35:19.064060 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:35:19.066536 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:35:19.076175 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:35:19.124179 systemd-resolved[228]: Positive Trust Anchors: Apr 17 23:35:19.125174 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:35:19.125237 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:35:19.134570 systemd-resolved[228]: Defaulting to hostname 'linux'. Apr 17 23:35:19.135961 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:35:19.136725 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:35:19.159396 kernel: SCSI subsystem initialized Apr 17 23:35:19.169387 kernel: Loading iSCSI transport class v2.0-870. 
Apr 17 23:35:19.181390 kernel: iscsi: registered transport (tcp) Apr 17 23:35:19.202748 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:35:19.202835 kernel: QLogic iSCSI HBA Driver Apr 17 23:35:19.241645 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:35:19.249520 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:35:19.274527 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:35:19.274603 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:35:19.275664 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:35:19.318395 kernel: raid6: avx512x4 gen() 17975 MB/s Apr 17 23:35:19.336382 kernel: raid6: avx512x2 gen() 17876 MB/s Apr 17 23:35:19.354387 kernel: raid6: avx512x1 gen() 17862 MB/s Apr 17 23:35:19.372380 kernel: raid6: avx2x4 gen() 17783 MB/s Apr 17 23:35:19.390384 kernel: raid6: avx2x2 gen() 17793 MB/s Apr 17 23:35:19.408611 kernel: raid6: avx2x1 gen() 13552 MB/s Apr 17 23:35:19.408664 kernel: raid6: using algorithm avx512x4 gen() 17975 MB/s Apr 17 23:35:19.427574 kernel: raid6: .... xor() 7788 MB/s, rmw enabled Apr 17 23:35:19.427629 kernel: raid6: using avx512x2 recovery algorithm Apr 17 23:35:19.449385 kernel: xor: automatically using best checksumming function avx Apr 17 23:35:19.608394 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:35:19.619116 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:35:19.623575 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:35:19.647055 systemd-udevd[398]: Using default interface naming scheme 'v255'. Apr 17 23:35:19.652205 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:35:19.659559 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 17 23:35:19.680945 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Apr 17 23:35:19.711927 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:35:19.716575 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:35:19.768930 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:35:19.776629 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:35:19.804954 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:35:19.808113 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:35:19.810220 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:35:19.811319 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:35:19.818618 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:35:19.849988 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:35:19.875401 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:35:19.883505 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 17 23:35:19.883815 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 17 23:35:19.890397 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Apr 17 23:35:19.897300 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:35:19.897574 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:35:19.908731 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:39:f8:44:6f:9f Apr 17 23:35:19.902604 (udev-worker)[444]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:35:19.910028 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 17 23:35:19.912445 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:35:19.912760 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:35:19.913397 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:35:19.928392 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:35:19.928453 kernel: AES CTR mode by8 optimization enabled Apr 17 23:35:19.926818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:35:19.934499 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 17 23:35:19.934736 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Apr 17 23:35:19.933889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:35:19.935494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:35:19.946794 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:35:19.951701 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 17 23:35:19.959763 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:35:19.959821 kernel: GPT:9289727 != 33554431 Apr 17 23:35:19.959849 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:35:19.959869 kernel: GPT:9289727 != 33554431 Apr 17 23:35:19.959887 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:35:19.959907 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:35:19.975638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:35:19.986858 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:35:20.003730 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 17 23:35:20.078990 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (456) Apr 17 23:35:20.085391 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/nvme0n1p3 scanned by (udev-worker) (448) Apr 17 23:35:20.103023 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 17 23:35:20.151276 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 17 23:35:20.160187 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 17 23:35:20.166078 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 17 23:35:20.166706 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 17 23:35:20.183619 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:35:20.191191 disk-uuid[630]: Primary Header is updated. Apr 17 23:35:20.191191 disk-uuid[630]: Secondary Entries is updated. Apr 17 23:35:20.191191 disk-uuid[630]: Secondary Header is updated. Apr 17 23:35:20.198381 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:35:20.207395 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:35:20.214387 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:35:21.221388 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:35:21.223234 disk-uuid[631]: The operation has completed successfully. Apr 17 23:35:21.363591 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:35:21.363728 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:35:21.386558 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Apr 17 23:35:21.390733 sh[972]: Success Apr 17 23:35:21.411412 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 17 23:35:21.530191 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:35:21.538506 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:35:21.540751 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:35:21.578579 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:35:21.578650 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:35:21.578672 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:35:21.580603 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:35:21.581933 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:35:21.716383 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:35:21.749768 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:35:21.751074 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:35:21.757564 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:35:21.760566 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 17 23:35:21.783851 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:35:21.783923 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:35:21.785875 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:35:21.802381 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:35:21.818414 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:35:21.818270 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:35:21.826111 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:35:21.833586 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:35:21.868041 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:35:21.876655 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:35:21.897332 systemd-networkd[1164]: lo: Link UP Apr 17 23:35:21.897344 systemd-networkd[1164]: lo: Gained carrier Apr 17 23:35:21.899513 systemd-networkd[1164]: Enumeration completed Apr 17 23:35:21.899971 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:35:21.899976 systemd-networkd[1164]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:35:21.901476 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:35:21.902812 systemd[1]: Reached target network.target - Network. Apr 17 23:35:21.904293 systemd-networkd[1164]: eth0: Link UP Apr 17 23:35:21.904299 systemd-networkd[1164]: eth0: Gained carrier Apr 17 23:35:21.904315 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 17 23:35:21.914451 systemd-networkd[1164]: eth0: DHCPv4 address 172.31.25.61/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 17 23:35:22.304117 ignition[1109]: Ignition 2.19.0 Apr 17 23:35:22.304132 ignition[1109]: Stage: fetch-offline Apr 17 23:35:22.304401 ignition[1109]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:35:22.304416 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:35:22.305100 ignition[1109]: Ignition finished successfully Apr 17 23:35:22.307553 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:35:22.313562 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 17 23:35:22.327548 ignition[1172]: Ignition 2.19.0 Apr 17 23:35:22.327559 ignition[1172]: Stage: fetch Apr 17 23:35:22.328011 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:35:22.328025 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:35:22.328143 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:35:22.360958 ignition[1172]: PUT result: OK Apr 17 23:35:22.363537 ignition[1172]: parsed url from cmdline: "" Apr 17 23:35:22.363548 ignition[1172]: no config URL provided Apr 17 23:35:22.363558 ignition[1172]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:35:22.363573 ignition[1172]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:35:22.363607 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:35:22.366185 ignition[1172]: PUT result: OK Apr 17 23:35:22.366319 ignition[1172]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 17 23:35:22.370555 ignition[1172]: GET result: OK Apr 17 23:35:22.370709 ignition[1172]: parsing config with SHA512: b0d9aefcffeef03e5ab1885ea7322e212cfba73cede0a07cac4b014b77bd0593933ad9fb7b76c5140a69ae8c19020fd54e5b158d89a7a355fb3177a36464de6d Apr 17 23:35:22.376201 unknown[1172]: fetched base config from "system"
Apr 17 23:35:22.376227 unknown[1172]: fetched base config from "system" Apr 17 23:35:22.376238 unknown[1172]: fetched user config from "aws" Apr 17 23:35:22.379709 ignition[1172]: fetch: fetch complete Apr 17 23:35:22.379724 ignition[1172]: fetch: fetch passed Apr 17 23:35:22.379811 ignition[1172]: Ignition finished successfully Apr 17 23:35:22.381820 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:35:22.386571 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:35:22.403519 ignition[1179]: Ignition 2.19.0 Apr 17 23:35:22.403533 ignition[1179]: Stage: kargs Apr 17 23:35:22.404016 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:35:22.404031 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:35:22.404150 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:35:22.405611 ignition[1179]: PUT result: OK Apr 17 23:35:22.408617 ignition[1179]: kargs: kargs passed Apr 17 23:35:22.408693 ignition[1179]: Ignition finished successfully Apr 17 23:35:22.410541 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:35:22.415607 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:35:22.430944 ignition[1185]: Ignition 2.19.0 Apr 17 23:35:22.430958 ignition[1185]: Stage: disks Apr 17 23:35:22.431448 ignition[1185]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:35:22.431462 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:35:22.431579 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:35:22.433886 ignition[1185]: PUT result: OK Apr 17 23:35:22.436818 ignition[1185]: disks: disks passed Apr 17 23:35:22.436898 ignition[1185]: Ignition finished successfully Apr 17 23:35:22.438175 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:35:22.439209 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:35:22.439640 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:35:22.440159 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:35:22.440755 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:35:22.441300 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:35:22.448559 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:35:22.487305 systemd-fsck[1193]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:35:22.490880 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:35:22.495524 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:35:22.601388 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:35:22.602044 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:35:22.603171 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:35:22.615500 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:35:22.619511 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:35:22.620677 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 23:35:22.620747 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:35:22.620782 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:35:22.627729 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:35:22.633548 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 17 23:35:22.645374 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1212) Apr 17 23:35:22.650398 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:35:22.650474 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:35:22.650494 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:35:22.664385 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:35:22.666405 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:35:23.059433 initrd-setup-root[1236]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:35:23.077657 initrd-setup-root[1243]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:35:23.082989 initrd-setup-root[1250]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:35:23.087585 initrd-setup-root[1257]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:35:23.364083 systemd-networkd[1164]: eth0: Gained IPv6LL Apr 17 23:35:23.404179 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:35:23.409507 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:35:23.412568 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:35:23.422117 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 17 23:35:23.424411 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:35:23.462407 ignition[1324]: INFO : Ignition 2.19.0 Apr 17 23:35:23.462407 ignition[1324]: INFO : Stage: mount Apr 17 23:35:23.465001 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:35:23.465001 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:35:23.465001 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:35:23.467875 ignition[1324]: INFO : PUT result: OK Apr 17 23:35:23.469546 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 17 23:35:23.471376 ignition[1324]: INFO : mount: mount passed Apr 17 23:35:23.471931 ignition[1324]: INFO : Ignition finished successfully Apr 17 23:35:23.473178 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:35:23.478486 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:35:23.607620 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:35:23.628389 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1336) Apr 17 23:35:23.631548 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:35:23.631625 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:35:23.634040 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:35:23.640756 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:35:23.642442 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:35:23.667013 ignition[1352]: INFO : Ignition 2.19.0 Apr 17 23:35:23.668375 ignition[1352]: INFO : Stage: files Apr 17 23:35:23.668375 ignition[1352]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:35:23.668375 ignition[1352]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:35:23.668375 ignition[1352]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:35:23.670318 ignition[1352]: INFO : PUT result: OK Apr 17 23:35:23.672020 ignition[1352]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:35:23.673223 ignition[1352]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:35:23.673223 ignition[1352]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:35:23.711485 ignition[1352]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:35:23.712436 ignition[1352]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:35:23.713162 ignition[1352]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:35:23.712613 unknown[1352]: wrote ssh authorized keys file for user: core Apr 17 23:35:23.715027 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:35:23.715865 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:35:23.715865 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:35:23.715865 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:35:23.812896 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 17 23:35:23.964743 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:35:23.964743 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 17 23:35:23.966691 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 17 23:35:24.253698 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 17 23:35:24.493887 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 17 23:35:24.495678 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Apr 17 23:35:24.495678 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Apr 17 23:35:24.495678 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:35:24.495678 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:35:24.495678 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:35:24.495678 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:35:24.495678 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:35:24.495678 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:35:24.495678 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:35:24.502460 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:35:24.502460 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:35:24.502460 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:35:24.502460 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:35:24.502460 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 17 23:35:24.963342 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Apr 17 23:35:26.713862 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:35:26.713862 ignition[1352]: INFO : files: op(d): [started] processing unit "containerd.service" Apr 17 23:35:26.716545 ignition[1352]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 17 23:35:26.716545 ignition[1352]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 17 23:35:26.716545 ignition[1352]: INFO : files: op(d): [finished] processing unit "containerd.service" Apr 17 23:35:26.716545 ignition[1352]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Apr 17 23:35:26.716545 ignition[1352]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:35:26.716545 ignition[1352]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:35:26.716545 ignition[1352]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Apr 17 23:35:26.716545 ignition[1352]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 17 23:35:26.716545 ignition[1352]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 17 23:35:26.716545 ignition[1352]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:35:26.716545 ignition[1352]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:35:26.716545 ignition[1352]: INFO : files: files passed Apr 17 23:35:26.716545 ignition[1352]: INFO : Ignition finished successfully Apr 17 23:35:26.717924 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 17 23:35:26.725621 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 17 23:35:26.731544 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 17 23:35:26.739725 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 17 23:35:26.739870 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:35:26.749863 initrd-setup-root-after-ignition[1382]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:35:26.749863 initrd-setup-root-after-ignition[1382]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:35:26.753596 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:35:26.755460 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:35:26.756136 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:35:26.760558 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:35:26.789487 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:35:26.789618 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:35:26.791178 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:35:26.792032 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:35:26.792815 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:35:26.797535 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:35:26.811614 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:35:26.815564 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:35:26.829179 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:35:26.829859 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:35:26.830890 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:35:26.831684 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:35:26.831863 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:35:26.832959 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:35:26.833795 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:35:26.834623 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:35:26.835385 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:35:26.836142 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:35:26.836923 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:35:26.837690 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:35:26.838517 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:35:26.839671 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:35:26.840432 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:35:26.841135 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:35:26.841314 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:35:26.842444 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:35:26.843233 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:35:26.843925 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:35:26.844684 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:35:26.845913 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:35:26.846103 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:35:26.847307 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:35:26.847513 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:35:26.848298 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:35:26.848474 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:35:26.854607 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:35:26.857616 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:35:26.860548 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:35:26.861470 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:35:26.862860 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:35:26.863580 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:35:26.873215 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:35:26.874281 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:35:26.876922 ignition[1406]: INFO : Ignition 2.19.0
Apr 17 23:35:26.876922 ignition[1406]: INFO : Stage: umount
Apr 17 23:35:26.878285 ignition[1406]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:35:26.878285 ignition[1406]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:35:26.878285 ignition[1406]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:35:26.880086 ignition[1406]: INFO : PUT result: OK
Apr 17 23:35:26.883497 ignition[1406]: INFO : umount: umount passed
Apr 17 23:35:26.884239 ignition[1406]: INFO : Ignition finished successfully
Apr 17 23:35:26.885903 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:35:26.886067 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:35:26.886868 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:35:26.886932 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:35:26.887528 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:35:26.887586 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:35:26.889522 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:35:26.889575 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:35:26.890068 systemd[1]: Stopped target network.target - Network.
Apr 17 23:35:26.892429 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:35:26.892499 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:35:26.893010 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:35:26.894424 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:35:26.894484 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:35:26.895234 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:35:26.895709 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:35:26.896174 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:35:26.896226 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:35:26.898466 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:35:26.898517 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:35:26.899472 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:35:26.899537 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:35:26.900000 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:35:26.900049 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:35:26.900508 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:35:26.901524 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:35:26.903770 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:35:26.905457 systemd-networkd[1164]: eth0: DHCPv6 lease lost
Apr 17 23:35:26.907672 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:35:26.907811 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:35:26.910802 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:35:26.910960 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:35:26.913851 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:35:26.913990 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:35:26.915968 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:35:26.916030 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:35:26.916769 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:35:26.916830 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:35:26.922466 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:35:26.922997 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:35:26.923072 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:35:26.923656 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:35:26.923711 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:35:26.924235 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:35:26.924291 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:35:26.924885 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:35:26.924941 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:35:26.925640 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:35:26.943038 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:35:26.943237 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:35:26.945882 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:35:26.945974 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:35:26.947480 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:35:26.947531 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:35:26.948180 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:35:26.948242 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:35:26.949354 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:35:26.949426 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:35:26.950563 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:35:26.950622 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:35:26.959526 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:35:26.960073 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:35:26.960152 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:35:26.963006 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 23:35:26.963074 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:35:26.963727 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:35:26.963790 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:35:26.964348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:35:26.965112 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:35:26.967369 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:35:26.967512 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:35:26.969610 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:35:26.969742 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:35:26.971058 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:35:26.979605 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:35:27.008271 systemd[1]: Switching root.
Apr 17 23:35:27.032425 systemd-journald[179]: Journal stopped
Apr 17 23:35:29.313629 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:35:29.313720 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:35:29.313744 kernel: SELinux: policy capability open_perms=1
Apr 17 23:35:29.313764 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:35:29.313786 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:35:29.313810 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:35:29.313829 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:35:29.313850 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:35:29.313874 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:35:29.313896 kernel: audit: type=1403 audit(1776468927.984:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:35:29.313918 systemd[1]: Successfully loaded SELinux policy in 53.361ms.
Apr 17 23:35:29.313951 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.403ms.
Apr 17 23:35:29.313975 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:35:29.313998 systemd[1]: Detected virtualization amazon.
Apr 17 23:35:29.314022 systemd[1]: Detected architecture x86-64.
Apr 17 23:35:29.314044 systemd[1]: Detected first boot.
Apr 17 23:35:29.314065 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:35:29.314086 zram_generator::config[1465]: No configuration found.
Apr 17 23:35:29.314114 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:35:29.314135 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:35:29.314166 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 17 23:35:29.314188 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:35:29.314213 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:35:29.314238 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:35:29.314259 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:35:29.314285 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:35:29.314309 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:35:29.314331 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:35:29.314351 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:35:29.314392 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:35:29.314416 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:35:29.314441 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:35:29.314462 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:35:29.314483 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:35:29.314505 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:35:29.314526 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:35:29.314548 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:35:29.314567 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:35:29.314590 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:35:29.314615 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:35:29.314639 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:35:29.314659 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:35:29.314679 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:35:29.314698 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:35:29.314720 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:35:29.314740 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:35:29.314759 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:35:29.314779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:35:29.314801 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:35:29.314820 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:35:29.314839 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:35:29.314859 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:35:29.314878 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:35:29.314898 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:35:29.314918 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:35:29.314938 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:35:29.314961 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:35:29.314980 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:35:29.315000 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:35:29.315020 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:35:29.315039 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:35:29.315059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:35:29.315078 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:35:29.315099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:35:29.315120 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:35:29.315142 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:35:29.315163 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:35:29.315184 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 17 23:35:29.315205 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 17 23:35:29.315223 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:35:29.315242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:35:29.315261 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:35:29.315284 kernel: loop: module loaded
Apr 17 23:35:29.315304 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:35:29.315325 kernel: fuse: init (API version 7.39)
Apr 17 23:35:29.315344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:35:29.317413 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:35:29.317449 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:35:29.317470 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:35:29.317490 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:35:29.317511 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:35:29.317532 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:35:29.317556 kernel: ACPI: bus type drm_connector registered
Apr 17 23:35:29.317576 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:35:29.317596 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:35:29.317616 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:35:29.317637 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:35:29.317657 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:35:29.317677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:35:29.317695 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:35:29.317717 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:35:29.317779 systemd-journald[1565]: Collecting audit messages is disabled.
Apr 17 23:35:29.317814 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:35:29.317835 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:35:29.317860 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:35:29.317882 systemd-journald[1565]: Journal started
Apr 17 23:35:29.317919 systemd-journald[1565]: Runtime Journal (/run/log/journal/ec2f120c7dfc92e54b6902ea53183aea) is 4.7M, max 38.2M, 33.4M free.
Apr 17 23:35:29.321508 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:35:29.323802 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:35:29.324082 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:35:29.325278 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:35:29.327635 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:35:29.328992 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:35:29.332223 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:35:29.335952 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:35:29.354888 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:35:29.361504 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:35:29.370476 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:35:29.371564 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:35:29.383663 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:35:29.394762 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:35:29.398471 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:35:29.411565 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:35:29.412461 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:35:29.416550 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:35:29.426629 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:35:29.436485 systemd-journald[1565]: Time spent on flushing to /var/log/journal/ec2f120c7dfc92e54b6902ea53183aea is 41.633ms for 974 entries.
Apr 17 23:35:29.436485 systemd-journald[1565]: System Journal (/var/log/journal/ec2f120c7dfc92e54b6902ea53183aea) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:35:29.484858 systemd-journald[1565]: Received client request to flush runtime journal.
Apr 17 23:35:29.435476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:35:29.442191 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:35:29.447013 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:35:29.449546 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:35:29.457918 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:35:29.470525 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:35:29.492880 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:35:29.517521 udevadm[1625]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 17 23:35:29.533998 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:35:29.615097 systemd-tmpfiles[1617]: ACLs are not supported, ignoring.
Apr 17 23:35:29.615127 systemd-tmpfiles[1617]: ACLs are not supported, ignoring.
Apr 17 23:35:29.623927 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:35:29.634698 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:35:29.683910 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:35:29.694562 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:35:29.711959 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Apr 17 23:35:29.711989 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Apr 17 23:35:29.717490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:35:30.208998 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:35:30.215589 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:35:30.242948 systemd-udevd[1645]: Using default interface naming scheme 'v255'.
Apr 17 23:35:30.313563 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:35:30.323515 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:35:30.352869 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:35:30.382463 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 17 23:35:30.419710 (udev-worker)[1648]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:35:30.459210 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:35:30.518881 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 17 23:35:30.568392 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Apr 17 23:35:30.574274 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Apr 17 23:35:30.574338 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:35:30.593426 kernel: ACPI: button: Power Button [PWRF]
Apr 17 23:35:30.593508 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Apr 17 23:35:30.595400 kernel: ACPI: button: Sleep Button [SLPF]
Apr 17 23:35:30.596408 systemd-networkd[1649]: lo: Link UP
Apr 17 23:35:30.597899 systemd-networkd[1649]: lo: Gained carrier
Apr 17 23:35:30.600542 systemd-networkd[1649]: Enumeration completed
Apr 17 23:35:30.600803 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:35:30.601243 systemd-networkd[1649]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:35:30.601326 systemd-networkd[1649]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:35:30.604004 systemd-networkd[1649]: eth0: Link UP
Apr 17 23:35:30.604295 systemd-networkd[1649]: eth0: Gained carrier
Apr 17 23:35:30.604415 systemd-networkd[1649]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:35:30.609776 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:35:30.616480 systemd-networkd[1649]: eth0: DHCPv4 address 172.31.25.61/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 17 23:35:30.620270 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:35:30.629987 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:35:30.630327 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:35:30.641558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:35:30.694416 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1655)
Apr 17 23:35:30.783286 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:35:30.844326 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:35:30.852608 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 17 23:35:30.857544 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:35:30.900476 lvm[1772]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:35:30.928650 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:35:30.930318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:35:30.937575 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:35:30.942848 lvm[1775]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:35:30.970707 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:35:30.972306 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:35:30.972980 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:35:30.973020 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:35:30.973626 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:35:30.975785 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:35:30.986586 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:35:30.988748 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:35:30.989670 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:35:30.992574 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:35:31.005750 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:35:31.008545 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:35:31.012687 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:35:31.028412 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:35:31.035425 kernel: loop0: detected capacity change from 0 to 228704
Apr 17 23:35:31.074536 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:35:31.075603 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:35:31.255383 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:35:31.297390 kernel: loop1: detected capacity change from 0 to 142488
Apr 17 23:35:31.423385 kernel: loop2: detected capacity change from 0 to 61336
Apr 17 23:35:31.564387 kernel: loop3: detected capacity change from 0 to 140768
Apr 17 23:35:31.678385 kernel: loop4: detected capacity change from 0 to 228704
Apr 17 23:35:31.712397 kernel: loop5: detected capacity change from 0 to 142488
Apr 17 23:35:31.736393 kernel: loop6: detected capacity change from 0 to 61336
Apr 17 23:35:31.753380 kernel: loop7: detected capacity change from 0 to 140768
Apr 17 23:35:31.773238 (sd-merge)[1796]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 17 23:35:31.773897 (sd-merge)[1796]: Merged extensions into '/usr'.
Apr 17 23:35:31.777996 systemd[1]: Reloading requested from client PID 1783 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:35:31.778013 systemd[1]: Reloading...
Apr 17 23:35:31.845450 zram_generator::config[1824]: No configuration found.
Apr 17 23:35:32.005055 systemd-networkd[1649]: eth0: Gained IPv6LL
Apr 17 23:35:32.015660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:35:32.102740 systemd[1]: Reloading finished in 324 ms.
Apr 17 23:35:32.128206 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:35:32.132204 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:35:32.141693 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:35:32.145554 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:35:32.151018 systemd[1]: Reloading requested from client PID 1883 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:35:32.151173 systemd[1]: Reloading...
Apr 17 23:35:32.183882 systemd-tmpfiles[1884]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:35:32.184921 systemd-tmpfiles[1884]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:35:32.186494 systemd-tmpfiles[1884]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:35:32.187055 systemd-tmpfiles[1884]: ACLs are not supported, ignoring.
Apr 17 23:35:32.187219 systemd-tmpfiles[1884]: ACLs are not supported, ignoring.
Apr 17 23:35:32.191851 systemd-tmpfiles[1884]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:35:32.193816 systemd-tmpfiles[1884]: Skipping /boot
Apr 17 23:35:32.214189 systemd-tmpfiles[1884]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:35:32.214382 systemd-tmpfiles[1884]: Skipping /boot
Apr 17 23:35:32.245386 zram_generator::config[1909]: No configuration found.
Apr 17 23:35:32.432256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:35:32.512848 systemd[1]: Reloading finished in 361 ms.
Apr 17 23:35:32.539879 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:35:32.557651 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:35:32.573587 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:35:32.575942 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:35:32.589144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:35:32.602608 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:35:32.614556 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:35:32.614761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:35:32.619755 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:35:32.632708 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:35:32.641153 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:35:32.642879 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:35:32.643537 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:35:32.648824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:35:32.649085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:35:32.654536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:35:32.654777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:35:32.670102 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:35:32.681168 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:35:32.682608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:35:32.685886 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:35:32.699144 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:35:32.699557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:35:32.708214 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:35:32.719908 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:35:32.737467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:35:32.739427 augenrules[2007]: No rules
Apr 17 23:35:32.761508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:35:32.763611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:35:32.764201 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:35:32.767495 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:35:32.771881 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:35:32.775141 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:35:32.776303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:35:32.777195 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:35:32.780131 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:35:32.782331 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:35:32.788996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:35:32.789218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:35:32.791487 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:35:32.792195 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:35:32.806218 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:35:32.822284 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:35:32.822410 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:35:32.831744 systemd-resolved[1976]: Positive Trust Anchors:
Apr 17 23:35:32.831764 systemd-resolved[1976]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:35:32.831810 systemd-resolved[1976]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:35:32.837714 systemd-resolved[1976]: Defaulting to hostname 'linux'.
Apr 17 23:35:32.839912 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:35:32.840966 systemd[1]: Reached target network.target - Network.
Apr 17 23:35:32.841556 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:35:32.842317 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:35:32.849894 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:35:32.850614 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:35:32.879409 ldconfig[1779]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:35:32.886500 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:35:32.895728 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:35:32.907188 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:35:32.908984 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:35:32.909874 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:35:32.910548 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:35:32.911173 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:35:32.911648 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:35:32.912001 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:35:32.912351 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:35:32.912601 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:35:32.912931 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:35:32.914011 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:35:32.915929 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:35:32.917509 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:35:32.919398 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:35:32.919892 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:35:32.920351 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:35:32.921034 systemd[1]: System is tainted: cgroupsv1
Apr 17 23:35:32.921084 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:35:32.921117 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:35:32.924454 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:35:32.929528 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 17 23:35:32.933212 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:35:32.948509 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:35:32.951467 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:35:32.952162 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:35:32.957947 jq[2043]: false
Apr 17 23:35:32.965506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:35:32.969555 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:35:32.975739 systemd[1]: Started ntpd.service - Network Time Service.
Apr 17 23:35:33.004266 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:35:33.019354 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:35:33.022062 dbus-daemon[2041]: [system] SELinux support is enabled
Apr 17 23:35:33.036025 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found loop4
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found loop5
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found loop6
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found loop7
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found nvme0n1
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found nvme0n1p1
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found nvme0n1p2
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found nvme0n1p3
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found usr
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found nvme0n1p4
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found nvme0n1p6
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found nvme0n1p7
Apr 17 23:35:33.040583 extend-filesystems[2044]: Found nvme0n1p9
Apr 17 23:35:33.040583 extend-filesystems[2044]: Checking size of /dev/nvme0n1p9
Apr 17 23:35:33.045596 dbus-daemon[2041]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1649 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 17 23:35:33.043061 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:35:33.136784 extend-filesystems[2044]: Resized partition /dev/nvme0n1p9
Apr 17 23:35:33.058065 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:35:33.079737 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:35:33.089386 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:35:33.108727 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:35:33.118510 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:35:33.143344 extend-filesystems[2082]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:35:33.152279 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 17 23:35:33.122669 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:35:33.152245 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:35:33.152627 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:35:33.170846 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:35:33.171197 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:35:33.179065 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:35:33.184279 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:35:33.198109 ntpd[2049]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting
Apr 17 23:35:33.231644 jq[2077]: true
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: ----------------------------------------------------
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: ntp-4 is maintained by Network Time Foundation,
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: corporation. Support and training for ntp-4 are
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: available at https://www.nwtime.org/support
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: ----------------------------------------------------
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: proto: precision = 0.064 usec (-24)
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: basedate set to 2026-04-05
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: gps base set to 2026-04-05 (week 2413)
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: Listen and drop on 0 v6wildcard [::]:123
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: Listen normally on 2 lo 127.0.0.1:123
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: Listen normally on 3 eth0 172.31.25.61:123
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: Listen normally on 4 lo [::1]:123
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: Listen normally on 5 eth0 [fe80::439:f8ff:fe44:6f9f%2]:123
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: Listening on routing socket on fd #22 for interface updates
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 23:35:33.244579 ntpd[2049]: 17 Apr 23:35:33 ntpd[2049]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 23:35:33.255751 update_engine[2069]: I20260417 23:35:33.219774 2069 main.cc:92] Flatcar Update Engine starting
Apr 17 23:35:33.255751 update_engine[2069]: I20260417 23:35:33.221530 2069 update_check_scheduler.cc:74] Next update check in 8m57s
Apr 17 23:35:33.198168 ntpd[2049]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 17 23:35:33.242653 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 23:35:33.198179 ntpd[2049]: ----------------------------------------------------
Apr 17 23:35:33.250753 (ntainerd)[2094]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:35:33.198190 ntpd[2049]: ntp-4 is maintained by Network Time Foundation,
Apr 17 23:35:33.272628 jq[2098]: true
Apr 17 23:35:33.198199 ntpd[2049]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 17 23:35:33.198209 ntpd[2049]: corporation. Support and training for ntp-4 are
Apr 17 23:35:33.198218 ntpd[2049]: available at https://www.nwtime.org/support
Apr 17 23:35:33.198227 ntpd[2049]: ----------------------------------------------------
Apr 17 23:35:33.204716 ntpd[2049]: proto: precision = 0.064 usec (-24)
Apr 17 23:35:33.207580 ntpd[2049]: basedate set to 2026-04-05
Apr 17 23:35:33.207602 ntpd[2049]: gps base set to 2026-04-05 (week 2413)
Apr 17 23:35:33.212376 ntpd[2049]: Listen and drop on 0 v6wildcard [::]:123
Apr 17 23:35:33.212432 ntpd[2049]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 17 23:35:33.212618 ntpd[2049]: Listen normally on 2 lo 127.0.0.1:123
Apr 17 23:35:33.212656 ntpd[2049]: Listen normally on 3 eth0 172.31.25.61:123
Apr 17 23:35:33.212696 ntpd[2049]: Listen normally on 4 lo [::1]:123
Apr 17 23:35:33.212741 ntpd[2049]: Listen normally on 5 eth0 [fe80::439:f8ff:fe44:6f9f%2]:123
Apr 17 23:35:33.212780 ntpd[2049]: Listening on routing socket on fd #22 for interface updates
Apr 17 23:35:33.229090 ntpd[2049]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 23:35:33.229121 ntpd[2049]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 23:35:33.280728 coreos-metadata[2040]: Apr 17 23:35:33.278 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 17 23:35:33.280728 coreos-metadata[2040]: Apr 17 23:35:33.280 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 17 23:35:33.284660 coreos-metadata[2040]: Apr 17 23:35:33.281 INFO Fetch successful
Apr 17 23:35:33.284660 coreos-metadata[2040]: Apr 17 23:35:33.281 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 17 23:35:33.284660 coreos-metadata[2040]: Apr 17 23:35:33.281 INFO Fetch successful
Apr 17 23:35:33.284660 coreos-metadata[2040]: Apr 17 23:35:33.281 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 17 23:35:33.286159 coreos-metadata[2040]: Apr 17 23:35:33.285 INFO Fetch successful
Apr 17 23:35:33.286159 coreos-metadata[2040]: Apr 17 23:35:33.285 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 17 23:35:33.296714 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:35:33.296768 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:35:33.297375 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:35:33.297405 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:35:33.301281 coreos-metadata[2040]: Apr 17 23:35:33.300 INFO Fetch successful
Apr 17 23:35:33.301281 coreos-metadata[2040]: Apr 17 23:35:33.300 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 17 23:35:33.307450 tar[2086]: linux-amd64/LICENSE
Apr 17 23:35:33.307450 tar[2086]: linux-amd64/helm
Apr 17 23:35:33.303466 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:35:33.303324 dbus-daemon[2041]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 17 23:35:33.304941 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:35:33.310657 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:35:33.320697 coreos-metadata[2040]: Apr 17 23:35:33.319 INFO Fetch failed with 404: resource not found
Apr 17 23:35:33.320697 coreos-metadata[2040]: Apr 17 23:35:33.319 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 17 23:35:33.324675 coreos-metadata[2040]: Apr 17 23:35:33.324 INFO Fetch successful
Apr 17 23:35:33.325644 coreos-metadata[2040]: Apr 17 23:35:33.324 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 17 23:35:33.325644 coreos-metadata[2040]: Apr 17 23:35:33.325 INFO Fetch successful
Apr 17 23:35:33.325644 coreos-metadata[2040]: Apr 17 23:35:33.325 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 17 23:35:33.330234 coreos-metadata[2040]: Apr 17 23:35:33.327 INFO Fetch successful
Apr 17 23:35:33.330234 coreos-metadata[2040]: Apr 17 23:35:33.327 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 17 23:35:33.335500 coreos-metadata[2040]: Apr 17 23:35:33.331 INFO Fetch successful
Apr 17 23:35:33.335500 coreos-metadata[2040]: Apr 17 23:35:33.331 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 17 23:35:33.335500 coreos-metadata[2040]: Apr 17 23:35:33.333 INFO Fetch successful
Apr 17 23:35:33.350405 sshd_keygen[2068]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 23:35:33.366263 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 17 23:35:33.376576 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 17 23:35:33.396599 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 17 23:35:33.425891 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 17 23:35:33.428126 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 17 23:35:33.429824 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:35:33.464001 extend-filesystems[2082]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 17 23:35:33.464001 extend-filesystems[2082]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 17 23:35:33.464001 extend-filesystems[2082]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 17 23:35:33.477874 extend-filesystems[2044]: Resized filesystem in /dev/nvme0n1p9
Apr 17 23:35:33.465308 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:35:33.465689 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:35:33.555769 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (2167)
Apr 17 23:35:33.577226 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 23:35:33.578576 bash[2164]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:35:33.583469 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 23:35:33.594840 systemd-logind[2063]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 17 23:35:33.597315 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 23:35:33.605519 systemd-logind[2063]: Watching system buttons on /dev/input/event3 (Sleep Button)
Apr 17 23:35:33.605548 systemd-logind[2063]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 23:35:33.607654 systemd-logind[2063]: New seat seat0.
Apr 17 23:35:33.616800 systemd[1]: Starting sshkeys.service...
Apr 17 23:35:33.638556 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:35:33.654296 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 23:35:33.667682 systemd[1]: Started sshd@0-172.31.25.61:22-20.229.252.112:37016.service - OpenSSH per-connection server daemon (20.229.252.112:37016).
Apr 17 23:35:33.676502 amazon-ssm-agent[2136]: Initializing new seelog logger
Apr 17 23:35:33.676502 amazon-ssm-agent[2136]: New Seelog Logger Creation Complete
Apr 17 23:35:33.676502 amazon-ssm-agent[2136]: 2026/04/17 23:35:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:35:33.676502 amazon-ssm-agent[2136]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:35:33.676502 amazon-ssm-agent[2136]: 2026/04/17 23:35:33 processing appconfig overrides
Apr 17 23:35:33.676502 amazon-ssm-agent[2136]: 2026/04/17 23:35:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:35:33.676502 amazon-ssm-agent[2136]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:35:33.676502 amazon-ssm-agent[2136]: 2026/04/17 23:35:33 processing appconfig overrides
Apr 17 23:35:33.676502 amazon-ssm-agent[2136]: 2026/04/17 23:35:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:35:33.676502 amazon-ssm-agent[2136]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:35:33.676502 amazon-ssm-agent[2136]: 2026/04/17 23:35:33 processing appconfig overrides
Apr 17 23:35:33.676502 amazon-ssm-agent[2136]: 2026-04-17 23:35:33 INFO Proxy environment variables:
Apr 17 23:35:33.718107 amazon-ssm-agent[2136]: 2026/04/17 23:35:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:35:33.718107 amazon-ssm-agent[2136]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:35:33.718107 amazon-ssm-agent[2136]: 2026/04/17 23:35:33 processing appconfig overrides
Apr 17 23:35:33.723200 systemd[1]: issuegen.service: Deactivated successfully.
Apr 17 23:35:33.723598 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 17 23:35:33.746486 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 17 23:35:33.763252 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 17 23:35:33.770954 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 17 23:35:33.794325 amazon-ssm-agent[2136]: 2026-04-17 23:35:33 INFO https_proxy:
Apr 17 23:35:33.851928 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 17 23:35:33.867937 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 17 23:35:33.880292 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 17 23:35:33.884532 systemd[1]: Reached target getty.target - Login Prompts.
Apr 17 23:35:33.895784 amazon-ssm-agent[2136]: 2026-04-17 23:35:33 INFO http_proxy:
Apr 17 23:35:34.000227 amazon-ssm-agent[2136]: 2026-04-17 23:35:33 INFO no_proxy:
Apr 17 23:35:34.022831 coreos-metadata[2213]: Apr 17 23:35:34.022 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 17 23:35:34.028194 coreos-metadata[2213]: Apr 17 23:35:34.028 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 17 23:35:34.034601 coreos-metadata[2213]: Apr 17 23:35:34.034 INFO Fetch successful
Apr 17 23:35:34.034708 coreos-metadata[2213]: Apr 17 23:35:34.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 17 23:35:34.034708 coreos-metadata[2213]: Apr 17 23:35:34.034 INFO Fetch successful
Apr 17 23:35:34.043930 unknown[2213]: wrote ssh authorized keys file for user: core
Apr 17 23:35:34.049116 locksmithd[2123]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 23:35:34.070905 dbus-daemon[2041]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 17 23:35:34.071685 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 17 23:35:34.076674 dbus-daemon[2041]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2138 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 17 23:35:34.085382 update-ssh-keys[2282]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:35:34.087862 systemd[1]: Starting polkit.service - Authorization Manager... Apr 17 23:35:34.092592 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 23:35:34.099192 systemd[1]: Finished sshkeys.service. Apr 17 23:35:34.104377 amazon-ssm-agent[2136]: 2026-04-17 23:35:33 INFO Checking if agent identity type OnPrem can be assumed Apr 17 23:35:34.144927 polkitd[2288]: Started polkitd version 121 Apr 17 23:35:34.175758 polkitd[2288]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 23:35:34.185599 polkitd[2288]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 23:35:34.189203 polkitd[2288]: Finished loading, compiling and executing 2 rules Apr 17 23:35:34.190991 dbus-daemon[2041]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 23:35:34.192467 systemd[1]: Started polkit.service - Authorization Manager. Apr 17 23:35:34.192573 polkitd[2288]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 23:35:34.211435 amazon-ssm-agent[2136]: 2026-04-17 23:35:33 INFO Checking if agent identity type EC2 can be assumed Apr 17 23:35:34.253638 systemd-hostnamed[2138]: Hostname set to (transient) Apr 17 23:35:34.253710 systemd-resolved[1976]: System hostname changed to 'ip-172-31-25-61'. Apr 17 23:35:34.256187 containerd[2094]: time="2026-04-17T23:35:34.255481537Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:35:34.305754 containerd[2094]: time="2026-04-17T23:35:34.305408864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:35:34.307618 amazon-ssm-agent[2136]: 2026-04-17 23:35:34 INFO Agent will take identity from EC2 Apr 17 23:35:34.307703 containerd[2094]: time="2026-04-17T23:35:34.307519594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:35:34.307703 containerd[2094]: time="2026-04-17T23:35:34.307647067Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 17 23:35:34.307703 containerd[2094]: time="2026-04-17T23:35:34.307673675Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 17 23:35:34.307878 containerd[2094]: time="2026-04-17T23:35:34.307853301Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:35:34.307925 containerd[2094]: time="2026-04-17T23:35:34.307884637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:35:34.307986 containerd[2094]: time="2026-04-17T23:35:34.307964631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:35:34.308037 containerd[2094]: time="2026-04-17T23:35:34.307989109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:35:34.308346 containerd[2094]: time="2026-04-17T23:35:34.308318755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:35:34.308426 containerd[2094]: time="2026-04-17T23:35:34.308348873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:35:34.308426 containerd[2094]: time="2026-04-17T23:35:34.308386368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:35:34.308426 containerd[2094]: time="2026-04-17T23:35:34.308401645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 17 23:35:34.308546 containerd[2094]: time="2026-04-17T23:35:34.308501812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:35:34.308789 containerd[2094]: time="2026-04-17T23:35:34.308765807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:35:34.309027 containerd[2094]: time="2026-04-17T23:35:34.309000883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:35:34.309080 containerd[2094]: time="2026-04-17T23:35:34.309028106Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:35:34.309155 containerd[2094]: time="2026-04-17T23:35:34.309132419Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 17 23:35:34.309215 containerd[2094]: time="2026-04-17T23:35:34.309200074Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:35:34.314489 containerd[2094]: time="2026-04-17T23:35:34.314455880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:35:34.314596 containerd[2094]: time="2026-04-17T23:35:34.314558542Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:35:34.314641 containerd[2094]: time="2026-04-17T23:35:34.314608336Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:35:34.314641 containerd[2094]: time="2026-04-17T23:35:34.314632895Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 17 23:35:34.314708 containerd[2094]: time="2026-04-17T23:35:34.314670875Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 17 23:35:34.315378 containerd[2094]: time="2026-04-17T23:35:34.315232770Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 17 23:35:34.316927 containerd[2094]: time="2026-04-17T23:35:34.316625270Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:35:34.317376 containerd[2094]: time="2026-04-17T23:35:34.317210969Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:35:34.317376 containerd[2094]: time="2026-04-17T23:35:34.317238433Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:35:34.317611 containerd[2094]: time="2026-04-17T23:35:34.317589175Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 17 23:35:34.317682 containerd[2094]: time="2026-04-17T23:35:34.317633813Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.317744366Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.317770443Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.317792689Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.317901697Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.317926311Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.317945079Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.318385109Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.318417719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.318454678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.318487050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.318522568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.318541934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.318561767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319377 containerd[2094]: time="2026-04-17T23:35:34.318639339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319886 containerd[2094]: time="2026-04-17T23:35:34.318662226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319886 containerd[2094]: time="2026-04-17T23:35:34.318683685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319886 containerd[2094]: time="2026-04-17T23:35:34.319083803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319886 containerd[2094]: time="2026-04-17T23:35:34.319107813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319886 containerd[2094]: time="2026-04-17T23:35:34.319127849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319886 containerd[2094]: time="2026-04-17T23:35:34.319163459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 17 23:35:34.319886 containerd[2094]: time="2026-04-17T23:35:34.319188286Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:35:34.319886 containerd[2094]: time="2026-04-17T23:35:34.319233354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319886 containerd[2094]: time="2026-04-17T23:35:34.319251743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.319886 containerd[2094]: time="2026-04-17T23:35:34.319268654Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:35:34.320238 containerd[2094]: time="2026-04-17T23:35:34.320059508Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 17 23:35:34.320238 containerd[2094]: time="2026-04-17T23:35:34.320152619Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:35:34.320238 containerd[2094]: time="2026-04-17T23:35:34.320179671Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:35:34.320238 containerd[2094]: time="2026-04-17T23:35:34.320206607Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:35:34.320238 containerd[2094]: time="2026-04-17T23:35:34.320225670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.320437 containerd[2094]: time="2026-04-17T23:35:34.320251884Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 17 23:35:34.320437 containerd[2094]: time="2026-04-17T23:35:34.320275425Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:35:34.320437 containerd[2094]: time="2026-04-17T23:35:34.320292593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 17 23:35:34.323061 containerd[2094]: time="2026-04-17T23:35:34.320788995Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:35:34.323061 containerd[2094]: time="2026-04-17T23:35:34.320899902Z" level=info msg="Connect containerd service" Apr 17 23:35:34.323061 containerd[2094]: time="2026-04-17T23:35:34.320974395Z" level=info msg="using legacy CRI server" Apr 17 23:35:34.323061 containerd[2094]: time="2026-04-17T23:35:34.320985890Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:35:34.323061 containerd[2094]: time="2026-04-17T23:35:34.321307973Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:35:34.327127 containerd[2094]: time="2026-04-17T23:35:34.327087272Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:35:34.332594 containerd[2094]: time="2026-04-17T23:35:34.330718140Z" level=info msg="Start subscribing containerd event" Apr 17 
23:35:34.332594 containerd[2094]: time="2026-04-17T23:35:34.330784985Z" level=info msg="Start recovering state" Apr 17 23:35:34.332594 containerd[2094]: time="2026-04-17T23:35:34.330863101Z" level=info msg="Start event monitor" Apr 17 23:35:34.332594 containerd[2094]: time="2026-04-17T23:35:34.330884652Z" level=info msg="Start snapshots syncer" Apr 17 23:35:34.332594 containerd[2094]: time="2026-04-17T23:35:34.330899934Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:35:34.332594 containerd[2094]: time="2026-04-17T23:35:34.330909398Z" level=info msg="Start streaming server" Apr 17 23:35:34.332594 containerd[2094]: time="2026-04-17T23:35:34.331888869Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:35:34.332594 containerd[2094]: time="2026-04-17T23:35:34.332029112Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:35:34.338621 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:35:34.340166 containerd[2094]: time="2026-04-17T23:35:34.339292993Z" level=info msg="containerd successfully booted in 0.085016s" Apr 17 23:35:34.406889 amazon-ssm-agent[2136]: 2026-04-17 23:35:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:35:34.508070 amazon-ssm-agent[2136]: 2026-04-17 23:35:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:35:34.606337 amazon-ssm-agent[2136]: 2026-04-17 23:35:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:35:34.705433 amazon-ssm-agent[2136]: 2026-04-17 23:35:34 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 17 23:35:34.790432 tar[2086]: linux-amd64/README.md Apr 17 23:35:34.805418 amazon-ssm-agent[2136]: 2026-04-17 23:35:34 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Apr 17 23:35:34.811577 sshd[2186]: Accepted publickey for core from 20.229.252.112 port 37016 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:34.811890 systemd[1]: Finished 
prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:35:34.814799 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:34.829973 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:35:34.840692 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:35:34.847209 systemd-logind[2063]: New session 1 of user core. Apr 17 23:35:34.864672 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:35:34.881938 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:35:34.892793 (systemd)[2335]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:35:34.906175 amazon-ssm-agent[2136]: 2026-04-17 23:35:34 INFO [amazon-ssm-agent] Starting Core Agent Apr 17 23:35:35.006884 amazon-ssm-agent[2136]: 2026-04-17 23:35:34 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 17 23:35:35.038301 systemd[2335]: Queued start job for default target default.target. Apr 17 23:35:35.039257 systemd[2335]: Created slice app.slice - User Application Slice. Apr 17 23:35:35.039290 systemd[2335]: Reached target paths.target - Paths. Apr 17 23:35:35.039308 systemd[2335]: Reached target timers.target - Timers. Apr 17 23:35:35.043470 systemd[2335]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:35:35.052284 systemd[2335]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:35:35.053417 systemd[2335]: Reached target sockets.target - Sockets. Apr 17 23:35:35.053458 systemd[2335]: Reached target basic.target - Basic System. Apr 17 23:35:35.053521 systemd[2335]: Reached target default.target - Main User Target. Apr 17 23:35:35.053561 systemd[2335]: Startup finished in 152ms. Apr 17 23:35:35.053626 systemd[1]: Started user@500.service - User Manager for UID 500. 
Apr 17 23:35:35.057255 amazon-ssm-agent[2136]: 2026-04-17 23:35:34 INFO [Registrar] Starting registrar module Apr 17 23:35:35.057255 amazon-ssm-agent[2136]: 2026-04-17 23:35:34 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 17 23:35:35.057255 amazon-ssm-agent[2136]: 2026-04-17 23:35:35 INFO [EC2Identity] EC2 registration was successful. Apr 17 23:35:35.057255 amazon-ssm-agent[2136]: 2026-04-17 23:35:35 INFO [CredentialRefresher] credentialRefresher has started Apr 17 23:35:35.057255 amazon-ssm-agent[2136]: 2026-04-17 23:35:35 INFO [CredentialRefresher] Starting credentials refresher loop Apr 17 23:35:35.057255 amazon-ssm-agent[2136]: 2026-04-17 23:35:35 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 17 23:35:35.058721 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:35:35.107548 amazon-ssm-agent[2136]: 2026-04-17 23:35:35 INFO [CredentialRefresher] Next credential rotation will be in 30.499994728333334 minutes Apr 17 23:35:35.780413 systemd[1]: Started sshd@1-172.31.25.61:22-20.229.252.112:38264.service - OpenSSH per-connection server daemon (20.229.252.112:38264). 
Apr 17 23:35:36.074076 amazon-ssm-agent[2136]: 2026-04-17 23:35:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 17 23:35:36.178400 amazon-ssm-agent[2136]: 2026-04-17 23:35:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2350) started Apr 17 23:35:36.280484 amazon-ssm-agent[2136]: 2026-04-17 23:35:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 17 23:35:36.798233 sshd[2347]: Accepted publickey for core from 20.229.252.112 port 38264 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:36.800310 sshd[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:36.814751 systemd-logind[2063]: New session 2 of user core. Apr 17 23:35:36.822634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:35:36.823837 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:35:36.826799 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:35:36.828081 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:35:36.831844 systemd[1]: Startup finished in 9.876s (kernel) + 8.897s (userspace) = 18.774s. Apr 17 23:35:37.506975 sshd[2347]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:37.510568 systemd[1]: sshd@1-172.31.25.61:22-20.229.252.112:38264.service: Deactivated successfully. Apr 17 23:35:37.515884 systemd-logind[2063]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:35:37.516620 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:35:37.517731 systemd-logind[2063]: Removed session 2. 
Apr 17 23:35:37.678670 systemd[1]: Started sshd@2-172.31.25.61:22-20.229.252.112:38278.service - OpenSSH per-connection server daemon (20.229.252.112:38278). Apr 17 23:35:38.694056 sshd[2384]: Accepted publickey for core from 20.229.252.112 port 38278 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:38.694991 sshd[2384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:38.704090 systemd-logind[2063]: New session 3 of user core. Apr 17 23:35:38.705578 kubelet[2368]: E0417 23:35:38.705534 2368 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:35:38.712173 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:35:38.712535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:35:38.712764 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:35:39.396647 sshd[2384]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:39.402034 systemd[1]: sshd@2-172.31.25.61:22-20.229.252.112:38278.service: Deactivated successfully. Apr 17 23:35:39.403720 systemd-logind[2063]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:35:39.406376 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:35:39.407423 systemd-logind[2063]: Removed session 3. Apr 17 23:35:39.568704 systemd[1]: Started sshd@3-172.31.25.61:22-20.229.252.112:38288.service - OpenSSH per-connection server daemon (20.229.252.112:38288). Apr 17 23:35:41.334987 systemd-resolved[1976]: Clock change detected. Flushing caches. 
Apr 17 23:35:41.719654 sshd[2395]: Accepted publickey for core from 20.229.252.112 port 38288 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:41.720313 sshd[2395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:41.725909 systemd-logind[2063]: New session 4 of user core. Apr 17 23:35:41.731635 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:35:42.426746 sshd[2395]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:42.431719 systemd[1]: sshd@3-172.31.25.61:22-20.229.252.112:38288.service: Deactivated successfully. Apr 17 23:35:42.436367 systemd-logind[2063]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:35:42.436372 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:35:42.437857 systemd-logind[2063]: Removed session 4. Apr 17 23:35:42.598554 systemd[1]: Started sshd@4-172.31.25.61:22-20.229.252.112:38298.service - OpenSSH per-connection server daemon (20.229.252.112:38298). Apr 17 23:35:43.613491 sshd[2403]: Accepted publickey for core from 20.229.252.112 port 38298 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:43.614184 sshd[2403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:43.619655 systemd-logind[2063]: New session 5 of user core. Apr 17 23:35:43.629550 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:35:44.197675 sudo[2407]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:35:44.198174 sudo[2407]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:35:44.213884 sudo[2407]: pam_unix(sudo:session): session closed for user root Apr 17 23:35:44.380476 sshd[2403]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:44.384177 systemd[1]: sshd@4-172.31.25.61:22-20.229.252.112:38298.service: Deactivated successfully. 
Apr 17 23:35:44.389836 systemd-logind[2063]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:35:44.390300 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:35:44.391679 systemd-logind[2063]: Removed session 5. Apr 17 23:35:44.553553 systemd[1]: Started sshd@5-172.31.25.61:22-20.229.252.112:38314.service - OpenSSH per-connection server daemon (20.229.252.112:38314). Apr 17 23:35:45.568934 sshd[2412]: Accepted publickey for core from 20.229.252.112 port 38314 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:45.569638 sshd[2412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:45.573964 systemd-logind[2063]: New session 6 of user core. Apr 17 23:35:45.581573 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 23:35:46.111337 sudo[2417]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:35:46.111726 sudo[2417]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:35:46.115473 sudo[2417]: pam_unix(sudo:session): session closed for user root Apr 17 23:35:46.120921 sudo[2416]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:35:46.121386 sudo[2416]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:35:46.141630 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:35:46.144224 auditctl[2420]: No rules Apr 17 23:35:46.144661 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:35:46.144967 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:35:46.153671 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:35:46.180098 augenrules[2439]: No rules Apr 17 23:35:46.181990 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Apr 17 23:35:46.184966 sudo[2416]: pam_unix(sudo:session): session closed for user root Apr 17 23:35:46.352189 sshd[2412]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:46.357657 systemd[1]: sshd@5-172.31.25.61:22-20.229.252.112:38314.service: Deactivated successfully. Apr 17 23:35:46.360936 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:35:46.361765 systemd-logind[2063]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:35:46.363917 systemd-logind[2063]: Removed session 6. Apr 17 23:35:46.531032 systemd[1]: Started sshd@6-172.31.25.61:22-20.229.252.112:44898.service - OpenSSH per-connection server daemon (20.229.252.112:44898). Apr 17 23:35:47.544142 sshd[2448]: Accepted publickey for core from 20.229.252.112 port 44898 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:47.544806 sshd[2448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:47.549255 systemd-logind[2063]: New session 7 of user core. Apr 17 23:35:47.560541 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:35:48.084860 sudo[2452]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:35:48.085278 sudo[2452]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:35:49.017526 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:35:49.018030 (dockerd)[2467]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:35:49.923044 dockerd[2467]: time="2026-04-17T23:35:49.922980264Z" level=info msg="Starting up" Apr 17 23:35:49.930104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:35:49.938449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 17 23:35:50.405609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:35:50.416686 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:35:50.459299 kubelet[2499]: E0417 23:35:50.459245 2499 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:35:50.463436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:35:50.463698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:35:50.520814 dockerd[2467]: time="2026-04-17T23:35:50.520586530Z" level=info msg="Loading containers: start." Apr 17 23:35:50.664224 kernel: Initializing XFRM netlink socket Apr 17 23:35:50.720124 (udev-worker)[2510]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:35:50.782956 systemd-networkd[1649]: docker0: Link UP Apr 17 23:35:50.804846 dockerd[2467]: time="2026-04-17T23:35:50.804800291Z" level=info msg="Loading containers: done." 
Apr 17 23:35:50.836397 dockerd[2467]: time="2026-04-17T23:35:50.836346582Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:35:50.836579 dockerd[2467]: time="2026-04-17T23:35:50.836466839Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:35:50.836628 dockerd[2467]: time="2026-04-17T23:35:50.836598961Z" level=info msg="Daemon has completed initialization" Apr 17 23:35:50.880337 dockerd[2467]: time="2026-04-17T23:35:50.879692182Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:35:50.880025 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:35:51.611256 containerd[2094]: time="2026-04-17T23:35:51.611215873Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 23:35:52.186680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2542285255.mount: Deactivated successfully. 
Apr 17 23:35:53.464944 containerd[2094]: time="2026-04-17T23:35:53.464895820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:53.466552 containerd[2094]: time="2026-04-17T23:35:53.466361716Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193989" Apr 17 23:35:53.467814 containerd[2094]: time="2026-04-17T23:35:53.467757798Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:53.471379 containerd[2094]: time="2026-04-17T23:35:53.471344282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:53.473028 containerd[2094]: time="2026-04-17T23:35:53.472174225Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.860913238s" Apr 17 23:35:53.473028 containerd[2094]: time="2026-04-17T23:35:53.472235175Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 17 23:35:53.473350 containerd[2094]: time="2026-04-17T23:35:53.473320091Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 23:35:54.905957 containerd[2094]: time="2026-04-17T23:35:54.905901180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:54.907267 containerd[2094]: time="2026-04-17T23:35:54.907209750Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171447" Apr 17 23:35:54.908717 containerd[2094]: time="2026-04-17T23:35:54.908662169Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:54.911763 containerd[2094]: time="2026-04-17T23:35:54.911712819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:54.912994 containerd[2094]: time="2026-04-17T23:35:54.912852714Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.439495878s" Apr 17 23:35:54.912994 containerd[2094]: time="2026-04-17T23:35:54.912892672Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 17 23:35:54.913956 containerd[2094]: time="2026-04-17T23:35:54.913922241Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 17 23:35:56.262850 containerd[2094]: time="2026-04-17T23:35:56.262787361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:56.264052 containerd[2094]: 
time="2026-04-17T23:35:56.263994196Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289756" Apr 17 23:35:56.265496 containerd[2094]: time="2026-04-17T23:35:56.265446084Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:56.268541 containerd[2094]: time="2026-04-17T23:35:56.268475690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:56.269749 containerd[2094]: time="2026-04-17T23:35:56.269606065Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.355647852s" Apr 17 23:35:56.269749 containerd[2094]: time="2026-04-17T23:35:56.269645831Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 17 23:35:56.270682 containerd[2094]: time="2026-04-17T23:35:56.270648789Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 23:35:57.512205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1265338949.mount: Deactivated successfully. 
Apr 17 23:35:58.082611 containerd[2094]: time="2026-04-17T23:35:58.082557224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:58.083818 containerd[2094]: time="2026-04-17T23:35:58.083666182Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010711" Apr 17 23:35:58.084990 containerd[2094]: time="2026-04-17T23:35:58.084928001Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:58.087349 containerd[2094]: time="2026-04-17T23:35:58.087288514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:58.088255 containerd[2094]: time="2026-04-17T23:35:58.088022038Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.817337736s" Apr 17 23:35:58.088255 containerd[2094]: time="2026-04-17T23:35:58.088062909Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 17 23:35:58.088839 containerd[2094]: time="2026-04-17T23:35:58.088801659Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 23:35:58.623947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914841872.mount: Deactivated successfully. 
Apr 17 23:35:59.892840 containerd[2094]: time="2026-04-17T23:35:59.892779867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:59.894596 containerd[2094]: time="2026-04-17T23:35:59.894535099Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Apr 17 23:35:59.898349 containerd[2094]: time="2026-04-17T23:35:59.898277939Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:59.905144 containerd[2094]: time="2026-04-17T23:35:59.905068825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:59.906562 containerd[2094]: time="2026-04-17T23:35:59.906380044Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.817539128s" Apr 17 23:35:59.906562 containerd[2094]: time="2026-04-17T23:35:59.906426839Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 17 23:35:59.907796 containerd[2094]: time="2026-04-17T23:35:59.907574066Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 17 23:36:00.432061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267981711.mount: Deactivated successfully. 
Apr 17 23:36:00.439683 containerd[2094]: time="2026-04-17T23:36:00.439629160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:00.440900 containerd[2094]: time="2026-04-17T23:36:00.440735045Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Apr 17 23:36:00.442566 containerd[2094]: time="2026-04-17T23:36:00.442293886Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:00.445084 containerd[2094]: time="2026-04-17T23:36:00.445031673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:00.445940 containerd[2094]: time="2026-04-17T23:36:00.445786813Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 538.17797ms" Apr 17 23:36:00.445940 containerd[2094]: time="2026-04-17T23:36:00.445832414Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 17 23:36:00.446683 containerd[2094]: time="2026-04-17T23:36:00.446653730Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 23:36:00.714025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 17 23:36:00.719417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 17 23:36:00.976439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:36:00.983494 (kubelet)[2766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:36:01.018481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4763024.mount: Deactivated successfully. Apr 17 23:36:01.083050 kubelet[2766]: E0417 23:36:01.082929 2766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:36:01.087337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:36:01.087801 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:36:02.204254 containerd[2094]: time="2026-04-17T23:36:02.204182119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:02.206084 containerd[2094]: time="2026-04-17T23:36:02.206017960Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719426" Apr 17 23:36:02.208333 containerd[2094]: time="2026-04-17T23:36:02.208269582Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:02.212400 containerd[2094]: time="2026-04-17T23:36:02.212354257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:02.214645 containerd[2094]: time="2026-04-17T23:36:02.213589698Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.766902629s" Apr 17 23:36:02.214645 containerd[2094]: time="2026-04-17T23:36:02.213632274Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 17 23:36:05.423719 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 17 23:36:05.475942 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:36:05.493510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:36:05.527466 systemd[1]: Reloading requested from client PID 2870 ('systemctl') (unit session-7.scope)... Apr 17 23:36:05.527483 systemd[1]: Reloading... Apr 17 23:36:05.656247 zram_generator::config[2913]: No configuration found. Apr 17 23:36:05.821965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:36:05.906556 systemd[1]: Reloading finished in 378 ms. Apr 17 23:36:05.961686 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 23:36:05.961825 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 23:36:05.962625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:36:05.970733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:36:06.232415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:36:06.235616 (kubelet)[2986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:36:06.291899 kubelet[2986]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:36:06.291899 kubelet[2986]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:36:06.291899 kubelet[2986]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:36:06.294951 kubelet[2986]: I0417 23:36:06.294804 2986 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:36:06.596009 kubelet[2986]: I0417 23:36:06.595891 2986 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:36:06.596009 kubelet[2986]: I0417 23:36:06.595929 2986 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:36:06.596317 kubelet[2986]: I0417 23:36:06.596291 2986 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:36:06.650217 kubelet[2986]: E0417 23:36:06.649424 2986 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.25.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:36:06.653064 kubelet[2986]: I0417 23:36:06.652909 2986 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:36:06.666647 kubelet[2986]: E0417 23:36:06.666608 2986 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:36:06.666647 kubelet[2986]: I0417 23:36:06.666647 2986 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:36:06.673948 kubelet[2986]: I0417 23:36:06.673896 2986 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 17 23:36:06.677042 kubelet[2986]: I0417 23:36:06.676984 2986 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:36:06.681272 kubelet[2986]: I0417 23:36:06.677035 2986 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-25-61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 17 23:36:06.682547 kubelet[2986]: I0417 23:36:06.682510 2986 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:36:06.682547 kubelet[2986]: I0417 23:36:06.682546 2986 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:36:06.686673 kubelet[2986]: I0417 23:36:06.686646 2986 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:36:06.694964 kubelet[2986]: I0417 23:36:06.694922 2986 kubelet.go:480] "Attempting to sync node with 
API server" Apr 17 23:36:06.694964 kubelet[2986]: I0417 23:36:06.694967 2986 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:36:06.696090 kubelet[2986]: I0417 23:36:06.695010 2986 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:36:06.696090 kubelet[2986]: I0417 23:36:06.695038 2986 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:36:06.706155 kubelet[2986]: I0417 23:36:06.706123 2986 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:36:06.706844 kubelet[2986]: I0417 23:36:06.706820 2986 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:36:06.708072 kubelet[2986]: W0417 23:36:06.708043 2986 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:36:06.709217 kubelet[2986]: E0417 23:36:06.709166 2986 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.25.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-61&limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:36:06.709345 kubelet[2986]: E0417 23:36:06.709320 2986 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.25.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:36:06.715352 kubelet[2986]: I0417 23:36:06.715326 2986 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:36:06.715446 kubelet[2986]: I0417 23:36:06.715377 2986 
server.go:1289] "Started kubelet" Apr 17 23:36:06.715578 kubelet[2986]: I0417 23:36:06.715520 2986 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:36:06.716398 kubelet[2986]: I0417 23:36:06.716372 2986 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:36:06.720122 kubelet[2986]: I0417 23:36:06.720060 2986 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:36:06.720439 kubelet[2986]: I0417 23:36:06.720413 2986 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:36:06.723612 kubelet[2986]: E0417 23:36:06.720544 2986 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.61:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.61:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-61.18a7491cf392c2fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-61,UID:ip-172-31-25-61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-61,},FirstTimestamp:2026-04-17 23:36:06.71534361 +0000 UTC m=+0.474294632,LastTimestamp:2026-04-17 23:36:06.71534361 +0000 UTC m=+0.474294632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-61,}" Apr 17 23:36:06.726970 kubelet[2986]: I0417 23:36:06.726935 2986 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:36:06.729565 kubelet[2986]: I0417 23:36:06.728899 2986 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:36:06.735246 kubelet[2986]: E0417 23:36:06.733081 2986 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ip-172-31-25-61\" not found" Apr 17 23:36:06.735246 kubelet[2986]: I0417 23:36:06.733149 2986 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:36:06.735246 kubelet[2986]: I0417 23:36:06.733466 2986 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:36:06.735246 kubelet[2986]: I0417 23:36:06.733625 2986 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:36:06.735246 kubelet[2986]: E0417 23:36:06.735007 2986 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.25.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:36:06.739857 kubelet[2986]: E0417 23:36:06.739815 2986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-61?timeout=10s\": dial tcp 172.31.25.61:6443: connect: connection refused" interval="200ms" Apr 17 23:36:06.740798 kubelet[2986]: I0417 23:36:06.740773 2986 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:36:06.741053 kubelet[2986]: I0417 23:36:06.741029 2986 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:36:06.744462 kubelet[2986]: I0417 23:36:06.744430 2986 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:36:06.745046 kubelet[2986]: E0417 23:36:06.745027 2986 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:36:06.749260 kubelet[2986]: I0417 23:36:06.749240 2986 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:36:06.772250 kubelet[2986]: I0417 23:36:06.772189 2986 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:36:06.772250 kubelet[2986]: I0417 23:36:06.772249 2986 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:36:06.772412 kubelet[2986]: I0417 23:36:06.772274 2986 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:36:06.772412 kubelet[2986]: I0417 23:36:06.772299 2986 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:36:06.772412 kubelet[2986]: E0417 23:36:06.772345 2986 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:36:06.775704 kubelet[2986]: E0417 23:36:06.775668 2986 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.25.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:36:06.792892 kubelet[2986]: I0417 23:36:06.792027 2986 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:36:06.792892 kubelet[2986]: I0417 23:36:06.792593 2986 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:36:06.792892 kubelet[2986]: I0417 23:36:06.792621 2986 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:36:06.795950 kubelet[2986]: I0417 23:36:06.795917 2986 policy_none.go:49] "None policy: Start" Apr 17 23:36:06.795950 kubelet[2986]: I0417 23:36:06.795950 2986 
memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:36:06.796119 kubelet[2986]: I0417 23:36:06.795964 2986 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:36:06.801328 kubelet[2986]: E0417 23:36:06.801293 2986 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:36:06.801534 kubelet[2986]: I0417 23:36:06.801514 2986 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:36:06.801597 kubelet[2986]: I0417 23:36:06.801532 2986 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:36:06.807324 kubelet[2986]: I0417 23:36:06.807084 2986 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:36:06.807446 kubelet[2986]: E0417 23:36:06.807387 2986 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:36:06.807446 kubelet[2986]: E0417 23:36:06.807427 2986 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-61\" not found" Apr 17 23:36:06.880746 kubelet[2986]: E0417 23:36:06.880292 2986 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-61\" not found" node="ip-172-31-25-61" Apr 17 23:36:06.886883 kubelet[2986]: E0417 23:36:06.886856 2986 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-61\" not found" node="ip-172-31-25-61" Apr 17 23:36:06.890254 kubelet[2986]: E0417 23:36:06.889914 2986 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-61\" not found" node="ip-172-31-25-61" Apr 17 23:36:06.903659 kubelet[2986]: I0417 23:36:06.903624 2986 kubelet_node_status.go:75] 
"Attempting to register node" node="ip-172-31-25-61"
Apr 17 23:36:06.904020 kubelet[2986]: E0417 23:36:06.903969 2986 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.61:6443/api/v1/nodes\": dial tcp 172.31.25.61:6443: connect: connection refused" node="ip-172-31-25-61"
Apr 17 23:36:06.940869 kubelet[2986]: E0417 23:36:06.940788 2986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-61?timeout=10s\": dial tcp 172.31.25.61:6443: connect: connection refused" interval="400ms"
Apr 17 23:36:07.035287 kubelet[2986]: I0417 23:36:07.035223 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9e751d7971f62679a1448bd0200b94b-ca-certs\") pod \"kube-apiserver-ip-172-31-25-61\" (UID: \"b9e751d7971f62679a1448bd0200b94b\") " pod="kube-system/kube-apiserver-ip-172-31-25-61"
Apr 17 23:36:07.035287 kubelet[2986]: I0417 23:36:07.035293 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9e751d7971f62679a1448bd0200b94b-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-61\" (UID: \"b9e751d7971f62679a1448bd0200b94b\") " pod="kube-system/kube-apiserver-ip-172-31-25-61"
Apr 17 23:36:07.035575 kubelet[2986]: I0417 23:36:07.035323 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9e751d7971f62679a1448bd0200b94b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-61\" (UID: \"b9e751d7971f62679a1448bd0200b94b\") " pod="kube-system/kube-apiserver-ip-172-31-25-61"
Apr 17 23:36:07.035575 kubelet[2986]: I0417 23:36:07.035382 2986 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f73ebdb206f65b444d8053fe602f911-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-61\" (UID: \"1f73ebdb206f65b444d8053fe602f911\") " pod="kube-system/kube-controller-manager-ip-172-31-25-61"
Apr 17 23:36:07.035575 kubelet[2986]: I0417 23:36:07.035427 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f73ebdb206f65b444d8053fe602f911-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-61\" (UID: \"1f73ebdb206f65b444d8053fe602f911\") " pod="kube-system/kube-controller-manager-ip-172-31-25-61"
Apr 17 23:36:07.035575 kubelet[2986]: I0417 23:36:07.035460 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f73ebdb206f65b444d8053fe602f911-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-61\" (UID: \"1f73ebdb206f65b444d8053fe602f911\") " pod="kube-system/kube-controller-manager-ip-172-31-25-61"
Apr 17 23:36:07.035575 kubelet[2986]: I0417 23:36:07.035513 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f73ebdb206f65b444d8053fe602f911-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-61\" (UID: \"1f73ebdb206f65b444d8053fe602f911\") " pod="kube-system/kube-controller-manager-ip-172-31-25-61"
Apr 17 23:36:07.035759 kubelet[2986]: I0417 23:36:07.035538 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f73ebdb206f65b444d8053fe602f911-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-61\" (UID: \"1f73ebdb206f65b444d8053fe602f911\") "
pod="kube-system/kube-controller-manager-ip-172-31-25-61"
Apr 17 23:36:07.035759 kubelet[2986]: I0417 23:36:07.035566 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5db017a1b885679c2625c45d48220a5c-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-61\" (UID: \"5db017a1b885679c2625c45d48220a5c\") " pod="kube-system/kube-scheduler-ip-172-31-25-61"
Apr 17 23:36:07.105880 kubelet[2986]: I0417 23:36:07.105847 2986 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-61"
Apr 17 23:36:07.106279 kubelet[2986]: E0417 23:36:07.106241 2986 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.61:6443/api/v1/nodes\": dial tcp 172.31.25.61:6443: connect: connection refused" node="ip-172-31-25-61"
Apr 17 23:36:07.182171 containerd[2094]: time="2026-04-17T23:36:07.182046447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-61,Uid:b9e751d7971f62679a1448bd0200b94b,Namespace:kube-system,Attempt:0,}"
Apr 17 23:36:07.188803 containerd[2094]: time="2026-04-17T23:36:07.188765417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-61,Uid:1f73ebdb206f65b444d8053fe602f911,Namespace:kube-system,Attempt:0,}"
Apr 17 23:36:07.191137 containerd[2094]: time="2026-04-17T23:36:07.190912135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-61,Uid:5db017a1b885679c2625c45d48220a5c,Namespace:kube-system,Attempt:0,}"
Apr 17 23:36:07.342024 kubelet[2986]: E0417 23:36:07.341964 2986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-61?timeout=10s\": dial tcp 172.31.25.61:6443: connect: connection refused" interval="800ms"
Apr 17 23:36:07.508000 kubelet[2986]: I0417 23:36:07.507912 2986
kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-61"
Apr 17 23:36:07.508304 kubelet[2986]: E0417 23:36:07.508272 2986 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.61:6443/api/v1/nodes\": dial tcp 172.31.25.61:6443: connect: connection refused" node="ip-172-31-25-61"
Apr 17 23:36:07.629880 kubelet[2986]: E0417 23:36:07.629757 2986 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.25.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:36:07.658898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960677626.mount: Deactivated successfully.
Apr 17 23:36:07.668431 containerd[2094]: time="2026-04-17T23:36:07.668374906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:36:07.669695 containerd[2094]: time="2026-04-17T23:36:07.669652423Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:36:07.670706 containerd[2094]: time="2026-04-17T23:36:07.670663201Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Apr 17 23:36:07.671869 containerd[2094]: time="2026-04-17T23:36:07.671818792Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 17 23:36:07.673129 containerd[2094]: time="2026-04-17T23:36:07.673091843Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\"
value:\"pinned\"}"
Apr 17 23:36:07.674575 containerd[2094]: time="2026-04-17T23:36:07.674536679Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:36:07.675527 containerd[2094]: time="2026-04-17T23:36:07.675466619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 17 23:36:07.677927 containerd[2094]: time="2026-04-17T23:36:07.677831987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:36:07.680836 containerd[2094]: time="2026-04-17T23:36:07.679390941Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 490.540352ms"
Apr 17 23:36:07.680836 containerd[2094]: time="2026-04-17T23:36:07.680776278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 498.637118ms"
Apr 17 23:36:07.685080 containerd[2094]: time="2026-04-17T23:36:07.685044489Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 494.055689ms"
Apr 17 23:36:07.793297 kubelet[2986]: E0417 23:36:07.791642 2986 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.25.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 23:36:08.063329 kubelet[2986]: E0417 23:36:08.063186 2986 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.25.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-61&limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:36:08.088743 containerd[2094]: time="2026-04-17T23:36:08.088651022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:36:08.088743 containerd[2094]: time="2026-04-17T23:36:08.088712672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:36:08.088952 containerd[2094]: time="2026-04-17T23:36:08.088909124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:36:08.089254 containerd[2094]: time="2026-04-17T23:36:08.089105305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:36:08.095255 containerd[2094]: time="2026-04-17T23:36:08.092704642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:36:08.095255 containerd[2094]: time="2026-04-17T23:36:08.092772844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:36:08.095255 containerd[2094]: time="2026-04-17T23:36:08.092795827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:36:08.095255 containerd[2094]: time="2026-04-17T23:36:08.092941816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:36:08.104070 kubelet[2986]: E0417 23:36:08.103997 2986 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.25.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:36:08.104522 containerd[2094]: time="2026-04-17T23:36:08.103265240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:36:08.104522 containerd[2094]: time="2026-04-17T23:36:08.103362953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:36:08.104522 containerd[2094]: time="2026-04-17T23:36:08.103387252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:36:08.104522 containerd[2094]: time="2026-04-17T23:36:08.104401824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:36:08.142539 kubelet[2986]: E0417 23:36:08.142498 2986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-61?timeout=10s\": dial tcp 172.31.25.61:6443: connect: connection refused" interval="1.6s"
Apr 17 23:36:08.212454 containerd[2094]: time="2026-04-17T23:36:08.212348374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-61,Uid:1f73ebdb206f65b444d8053fe602f911,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c4c66deeb9fa1c1b167b159db79ca76227fdaff6308837c0f42c0f335d7a999\""
Apr 17 23:36:08.226573 containerd[2094]: time="2026-04-17T23:36:08.226065353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-61,Uid:5db017a1b885679c2625c45d48220a5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfb0d54c07cdfc132787be2915b9badcf5cbaeac8419c6d0383ab765f5b7d3df\""
Apr 17 23:36:08.229364 containerd[2094]: time="2026-04-17T23:36:08.229234988Z" level=info msg="CreateContainer within sandbox \"3c4c66deeb9fa1c1b167b159db79ca76227fdaff6308837c0f42c0f335d7a999\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 17 23:36:08.233671 containerd[2094]: time="2026-04-17T23:36:08.233626582Z" level=info msg="CreateContainer within sandbox \"dfb0d54c07cdfc132787be2915b9badcf5cbaeac8419c6d0383ab765f5b7d3df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 17 23:36:08.244424 containerd[2094]: time="2026-04-17T23:36:08.244375322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-61,Uid:b9e751d7971f62679a1448bd0200b94b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e71a5c3c866e93d862e0a1d9d13a04fee05543c161504926b76d6c8fd0138b4e\""
Apr 17 23:36:08.253150 containerd[2094]: time="2026-04-17T23:36:08.253112965Z"
level=info msg="CreateContainer within sandbox \"e71a5c3c866e93d862e0a1d9d13a04fee05543c161504926b76d6c8fd0138b4e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 17 23:36:08.289507 containerd[2094]: time="2026-04-17T23:36:08.289456658Z" level=info msg="CreateContainer within sandbox \"e71a5c3c866e93d862e0a1d9d13a04fee05543c161504926b76d6c8fd0138b4e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cb0fb57c200cfe441fcd4bf5eaeec97c13d9469a2c526e84798b8239a23d5ce4\""
Apr 17 23:36:08.291328 containerd[2094]: time="2026-04-17T23:36:08.290277095Z" level=info msg="StartContainer for \"cb0fb57c200cfe441fcd4bf5eaeec97c13d9469a2c526e84798b8239a23d5ce4\""
Apr 17 23:36:08.296087 containerd[2094]: time="2026-04-17T23:36:08.296044473Z" level=info msg="CreateContainer within sandbox \"3c4c66deeb9fa1c1b167b159db79ca76227fdaff6308837c0f42c0f335d7a999\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0f4022f87aa04561a3cb882b80ab48accb19b4ea8e317b542456a71f16ef3249\""
Apr 17 23:36:08.296812 containerd[2094]: time="2026-04-17T23:36:08.296785269Z" level=info msg="StartContainer for \"0f4022f87aa04561a3cb882b80ab48accb19b4ea8e317b542456a71f16ef3249\""
Apr 17 23:36:08.300343 containerd[2094]: time="2026-04-17T23:36:08.300311625Z" level=info msg="CreateContainer within sandbox \"dfb0d54c07cdfc132787be2915b9badcf5cbaeac8419c6d0383ab765f5b7d3df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8375737e2792a80007c6589ea606a0b6fc54744e04d258f034235ca8d5cd1c3e\""
Apr 17 23:36:08.301519 containerd[2094]: time="2026-04-17T23:36:08.301492129Z" level=info msg="StartContainer for \"8375737e2792a80007c6589ea606a0b6fc54744e04d258f034235ca8d5cd1c3e\""
Apr 17 23:36:08.310697 kubelet[2986]: I0417 23:36:08.310666 2986 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-61"
Apr 17 23:36:08.311557 kubelet[2986]: E0417 23:36:08.311527 2986
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.61:6443/api/v1/nodes\": dial tcp 172.31.25.61:6443: connect: connection refused" node="ip-172-31-25-61"
Apr 17 23:36:08.440442 containerd[2094]: time="2026-04-17T23:36:08.438663278Z" level=info msg="StartContainer for \"0f4022f87aa04561a3cb882b80ab48accb19b4ea8e317b542456a71f16ef3249\" returns successfully"
Apr 17 23:36:08.454641 containerd[2094]: time="2026-04-17T23:36:08.454439534Z" level=info msg="StartContainer for \"8375737e2792a80007c6589ea606a0b6fc54744e04d258f034235ca8d5cd1c3e\" returns successfully"
Apr 17 23:36:08.492486 containerd[2094]: time="2026-04-17T23:36:08.491990008Z" level=info msg="StartContainer for \"cb0fb57c200cfe441fcd4bf5eaeec97c13d9469a2c526e84798b8239a23d5ce4\" returns successfully"
Apr 17 23:36:08.792229 kubelet[2986]: E0417 23:36:08.790799 2986 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-61\" not found" node="ip-172-31-25-61"
Apr 17 23:36:08.798218 kubelet[2986]: E0417 23:36:08.797541 2986 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-61\" not found" node="ip-172-31-25-61"
Apr 17 23:36:08.802557 kubelet[2986]: E0417 23:36:08.802531 2986 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-61\" not found" node="ip-172-31-25-61"
Apr 17 23:36:08.838543 kubelet[2986]: E0417 23:36:08.838502 2986 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.25.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:36:09.743039 kubelet[2986]: E0417 23:36:09.742992 2986
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-61?timeout=10s\": dial tcp 172.31.25.61:6443: connect: connection refused" interval="3.2s"
Apr 17 23:36:09.804526 kubelet[2986]: E0417 23:36:09.804362 2986 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-61\" not found" node="ip-172-31-25-61"
Apr 17 23:36:09.804526 kubelet[2986]: E0417 23:36:09.804468 2986 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-61\" not found" node="ip-172-31-25-61"
Apr 17 23:36:09.826323 kubelet[2986]: E0417 23:36:09.826277 2986 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.25.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:36:09.913734 kubelet[2986]: I0417 23:36:09.913702 2986 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-61"
Apr 17 23:36:09.914129 kubelet[2986]: E0417 23:36:09.914098 2986 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.61:6443/api/v1/nodes\": dial tcp 172.31.25.61:6443: connect: connection refused" node="ip-172-31-25-61"
Apr 17 23:36:09.961451 kubelet[2986]: E0417 23:36:09.961405 2986 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.25.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 23:36:10.009593 kubelet[2986]: E0417 23:36:10.009474
2986 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.25.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:36:10.939470 kubelet[2986]: E0417 23:36:10.939427 2986 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.25.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-61&limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:36:11.588353 kubelet[2986]: E0417 23:36:11.588320 2986 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-61\" not found" node="ip-172-31-25-61"
Apr 17 23:36:11.625552 kubelet[2986]: E0417 23:36:11.625451 2986 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.61:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.61:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-61.18a7491cf392c2fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-61,UID:ip-172-31-25-61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-61,},FirstTimestamp:2026-04-17 23:36:06.71534361 +0000 UTC m=+0.474294632,LastTimestamp:2026-04-17 23:36:06.71534361 +0000 UTC m=+0.474294632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-61,}"
Apr 17 23:36:11.638512 kubelet[2986]: E0417 23:36:11.638476 2986 kubelet.go:3305] "No need to create a mirror pod, since failed
to get node info from the cluster" err="node \"ip-172-31-25-61\" not found" node="ip-172-31-25-61"
Apr 17 23:36:12.877325 kubelet[2986]: E0417 23:36:12.877280 2986 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.25.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:36:12.944401 kubelet[2986]: E0417 23:36:12.944354 2986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-61?timeout=10s\": dial tcp 172.31.25.61:6443: connect: connection refused" interval="6.4s"
Apr 17 23:36:13.116220 kubelet[2986]: I0417 23:36:13.115972 2986 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-61"
Apr 17 23:36:13.116351 kubelet[2986]: E0417 23:36:13.116275 2986 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.61:6443/api/v1/nodes\": dial tcp 172.31.25.61:6443: connect: connection refused" node="ip-172-31-25-61"
Apr 17 23:36:15.054422 kubelet[2986]: E0417 23:36:15.054375 2986 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.25.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:36:16.807999 kubelet[2986]: E0417 23:36:16.807943 2986 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-61\" not found"
Apr 17 23:36:17.209480 kubelet[2986]: E0417 23:36:17.209356 2986 csi_plugin.go:397] Failed to initialize CSINode: error updating
CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-25-61" not found
Apr 17 23:36:17.573550 kubelet[2986]: E0417 23:36:17.573428 2986 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-25-61" not found
Apr 17 23:36:17.706458 kubelet[2986]: I0417 23:36:17.706177 2986 apiserver.go:52] "Watching apiserver"
Apr 17 23:36:17.734294 kubelet[2986]: I0417 23:36:17.734254 2986 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 17 23:36:17.912861 kubelet[2986]: E0417 23:36:17.912681 2986 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-61\" not found" node="ip-172-31-25-61"
Apr 17 23:36:18.174825 kubelet[2986]: E0417 23:36:18.174717 2986 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-25-61" not found
Apr 17 23:36:19.194887 update_engine[2069]: I20260417 23:36:19.194811 2069 update_attempter.cc:509] Updating boot flags...
Apr 17 23:36:19.250294 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3284)
Apr 17 23:36:19.354281 kubelet[2986]: E0417 23:36:19.353052 2986 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-61\" not found" node="ip-172-31-25-61"
Apr 17 23:36:19.453307 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3288)
Apr 17 23:36:19.524752 kubelet[2986]: I0417 23:36:19.524704 2986 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-61"
Apr 17 23:36:19.537465 kubelet[2986]: I0417 23:36:19.537425 2986 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-25-61"
Apr 17 23:36:19.635562 kubelet[2986]: I0417 23:36:19.635521 2986 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-61"
Apr 17 23:36:19.661047 kubelet[2986]: I0417 23:36:19.657903 2986 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-61"
Apr 17 23:36:19.664810 kubelet[2986]: I0417 23:36:19.664776 2986 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-61"
Apr 17 23:36:19.683216 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3288)
Apr 17 23:36:19.912659 systemd[1]: Reloading requested from client PID 3538 ('systemctl') (unit session-7.scope)...
Apr 17 23:36:19.912678 systemd[1]: Reloading...
Apr 17 23:36:20.016534 zram_generator::config[3574]: No configuration found.
Apr 17 23:36:20.171123 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:36:20.269387 systemd[1]: Reloading finished in 356 ms.
Apr 17 23:36:20.312960 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:36:20.331789 systemd[1]: kubelet.service: Deactivated successfully.
Apr 17 23:36:20.332207 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:36:20.341591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:36:21.139412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:36:21.152936 (kubelet)[3648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:36:21.247527 kubelet[3648]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:36:21.247527 kubelet[3648]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 23:36:21.247527 kubelet[3648]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:36:21.247527 kubelet[3648]: I0417 23:36:21.246232 3648 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 23:36:21.260229 kubelet[3648]: I0417 23:36:21.258759 3648 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 17 23:36:21.260229 kubelet[3648]: I0417 23:36:21.258791 3648 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:36:21.260229 kubelet[3648]: I0417 23:36:21.259273 3648 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 23:36:21.261503 kubelet[3648]: I0417 23:36:21.261480 3648 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 17 23:36:21.265876 sudo[3659]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 17 23:36:21.266729 sudo[3659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 17 23:36:21.269935 kubelet[3648]: I0417 23:36:21.269749 3648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:36:21.297357 kubelet[3648]: E0417 23:36:21.297310 3648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:36:21.297510 kubelet[3648]: I0417 23:36:21.297380 3648 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:36:21.302890 kubelet[3648]: I0417 23:36:21.302525 3648 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /"
Apr 17 23:36:21.303733 kubelet[3648]: I0417 23:36:21.303127 3648 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:36:21.303733 kubelet[3648]: I0417 23:36:21.303172 3648 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 17 23:36:21.303733 kubelet[3648]: I0417 23:36:21.303418 3648 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17
23:36:21.303733 kubelet[3648]: I0417 23:36:21.303432 3648 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:36:21.305922 kubelet[3648]: I0417 23:36:21.305293 3648 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:36:21.307882 kubelet[3648]: I0417 23:36:21.307860 3648 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:36:21.307972 kubelet[3648]: I0417 23:36:21.307888 3648 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:36:21.307972 kubelet[3648]: I0417 23:36:21.307920 3648 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:36:21.307972 kubelet[3648]: I0417 23:36:21.307939 3648 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:36:21.314860 kubelet[3648]: I0417 23:36:21.314832 3648 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:36:21.315612 kubelet[3648]: I0417 23:36:21.315591 3648 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:36:21.352433 kubelet[3648]: I0417 23:36:21.352408 3648 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:36:21.352554 kubelet[3648]: I0417 23:36:21.352461 3648 server.go:1289] "Started kubelet" Apr 17 23:36:21.358303 kubelet[3648]: I0417 23:36:21.357955 3648 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:36:21.362058 kubelet[3648]: I0417 23:36:21.360523 3648 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:36:21.362058 kubelet[3648]: I0417 23:36:21.360782 3648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:36:21.362058 kubelet[3648]: I0417 23:36:21.361109 3648 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 
23:36:21.364517 kubelet[3648]: I0417 23:36:21.364429 3648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:36:21.379054 kubelet[3648]: I0417 23:36:21.377591 3648 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:36:21.379054 kubelet[3648]: I0417 23:36:21.378478 3648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:36:21.379054 kubelet[3648]: I0417 23:36:21.378836 3648 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:36:21.380625 kubelet[3648]: E0417 23:36:21.379877 3648 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:36:21.382479 kubelet[3648]: I0417 23:36:21.381655 3648 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:36:21.382479 kubelet[3648]: I0417 23:36:21.382084 3648 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:36:21.385463 kubelet[3648]: I0417 23:36:21.382959 3648 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:36:21.389767 kubelet[3648]: I0417 23:36:21.389667 3648 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:36:21.431136 kubelet[3648]: I0417 23:36:21.431083 3648 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:36:21.435877 kubelet[3648]: I0417 23:36:21.433319 3648 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:36:21.435877 kubelet[3648]: I0417 23:36:21.433347 3648 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:36:21.435877 kubelet[3648]: I0417 23:36:21.433376 3648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:36:21.435877 kubelet[3648]: I0417 23:36:21.433385 3648 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:36:21.435877 kubelet[3648]: E0417 23:36:21.433436 3648 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:36:21.515266 kubelet[3648]: I0417 23:36:21.515240 3648 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:36:21.515438 kubelet[3648]: I0417 23:36:21.515420 3648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:36:21.515530 kubelet[3648]: I0417 23:36:21.515522 3648 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:36:21.515744 kubelet[3648]: I0417 23:36:21.515732 3648 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:36:21.515836 kubelet[3648]: I0417 23:36:21.515813 3648 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:36:21.516285 kubelet[3648]: I0417 23:36:21.515906 3648 policy_none.go:49] "None policy: Start" Apr 17 23:36:21.516285 kubelet[3648]: I0417 23:36:21.515921 3648 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:36:21.516285 kubelet[3648]: I0417 23:36:21.515934 3648 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:36:21.516285 kubelet[3648]: I0417 23:36:21.516067 3648 state_mem.go:75] "Updated machine memory state" Apr 17 23:36:21.518830 kubelet[3648]: E0417 23:36:21.518168 3648 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:36:21.519178 kubelet[3648]: I0417 
23:36:21.519104 3648 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:36:21.519830 kubelet[3648]: I0417 23:36:21.519286 3648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:36:21.523703 kubelet[3648]: I0417 23:36:21.523467 3648 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:36:21.527648 kubelet[3648]: E0417 23:36:21.527627 3648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:36:21.534173 kubelet[3648]: I0417 23:36:21.534149 3648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-61" Apr 17 23:36:21.536073 kubelet[3648]: I0417 23:36:21.534821 3648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-61" Apr 17 23:36:21.538211 kubelet[3648]: I0417 23:36:21.534995 3648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-61" Apr 17 23:36:21.547306 kubelet[3648]: E0417 23:36:21.546736 3648 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-61\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-61" Apr 17 23:36:21.548391 kubelet[3648]: E0417 23:36:21.547787 3648 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-61\" already exists" pod="kube-system/kube-scheduler-ip-172-31-25-61" Apr 17 23:36:21.552444 kubelet[3648]: E0417 23:36:21.552343 3648 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-61\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-25-61" Apr 17 23:36:21.585075 kubelet[3648]: I0417 23:36:21.585034 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/b9e751d7971f62679a1448bd0200b94b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-61\" (UID: \"b9e751d7971f62679a1448bd0200b94b\") " pod="kube-system/kube-apiserver-ip-172-31-25-61" Apr 17 23:36:21.585075 kubelet[3648]: I0417 23:36:21.585084 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f73ebdb206f65b444d8053fe602f911-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-61\" (UID: \"1f73ebdb206f65b444d8053fe602f911\") " pod="kube-system/kube-controller-manager-ip-172-31-25-61" Apr 17 23:36:21.585300 kubelet[3648]: I0417 23:36:21.585105 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f73ebdb206f65b444d8053fe602f911-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-61\" (UID: \"1f73ebdb206f65b444d8053fe602f911\") " pod="kube-system/kube-controller-manager-ip-172-31-25-61" Apr 17 23:36:21.585300 kubelet[3648]: I0417 23:36:21.585127 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f73ebdb206f65b444d8053fe602f911-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-61\" (UID: \"1f73ebdb206f65b444d8053fe602f911\") " pod="kube-system/kube-controller-manager-ip-172-31-25-61" Apr 17 23:36:21.585300 kubelet[3648]: I0417 23:36:21.585154 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f73ebdb206f65b444d8053fe602f911-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-61\" (UID: \"1f73ebdb206f65b444d8053fe602f911\") " pod="kube-system/kube-controller-manager-ip-172-31-25-61" Apr 17 23:36:21.585300 kubelet[3648]: I0417 23:36:21.585179 3648 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9e751d7971f62679a1448bd0200b94b-ca-certs\") pod \"kube-apiserver-ip-172-31-25-61\" (UID: \"b9e751d7971f62679a1448bd0200b94b\") " pod="kube-system/kube-apiserver-ip-172-31-25-61" Apr 17 23:36:21.585300 kubelet[3648]: I0417 23:36:21.585210 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f73ebdb206f65b444d8053fe602f911-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-61\" (UID: \"1f73ebdb206f65b444d8053fe602f911\") " pod="kube-system/kube-controller-manager-ip-172-31-25-61" Apr 17 23:36:21.585499 kubelet[3648]: I0417 23:36:21.585233 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5db017a1b885679c2625c45d48220a5c-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-61\" (UID: \"5db017a1b885679c2625c45d48220a5c\") " pod="kube-system/kube-scheduler-ip-172-31-25-61" Apr 17 23:36:21.585499 kubelet[3648]: I0417 23:36:21.585272 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9e751d7971f62679a1448bd0200b94b-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-61\" (UID: \"b9e751d7971f62679a1448bd0200b94b\") " pod="kube-system/kube-apiserver-ip-172-31-25-61" Apr 17 23:36:21.632232 kubelet[3648]: I0417 23:36:21.632175 3648 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-61" Apr 17 23:36:21.641139 kubelet[3648]: I0417 23:36:21.640703 3648 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-25-61" Apr 17 23:36:21.641139 kubelet[3648]: I0417 23:36:21.640773 3648 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-25-61" Apr 17 23:36:22.022999 sudo[3659]: 
pam_unix(sudo:session): session closed for user root Apr 17 23:36:22.313748 kubelet[3648]: I0417 23:36:22.313399 3648 apiserver.go:52] "Watching apiserver" Apr 17 23:36:22.379316 kubelet[3648]: I0417 23:36:22.379260 3648 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:36:22.469404 kubelet[3648]: I0417 23:36:22.468603 3648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-61" Apr 17 23:36:22.477065 kubelet[3648]: E0417 23:36:22.476659 3648 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-61\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-61" Apr 17 23:36:22.519578 kubelet[3648]: I0417 23:36:22.519512 3648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-61" podStartSLOduration=3.519488308 podStartE2EDuration="3.519488308s" podCreationTimestamp="2026-04-17 23:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:22.502778562 +0000 UTC m=+1.332780625" watchObservedRunningTime="2026-04-17 23:36:22.519488308 +0000 UTC m=+1.349490365" Apr 17 23:36:22.540656 kubelet[3648]: I0417 23:36:22.540588 3648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-61" podStartSLOduration=3.540566301 podStartE2EDuration="3.540566301s" podCreationTimestamp="2026-04-17 23:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:22.522582079 +0000 UTC m=+1.352584143" watchObservedRunningTime="2026-04-17 23:36:22.540566301 +0000 UTC m=+1.370568358" Apr 17 23:36:22.540856 kubelet[3648]: I0417 23:36:22.540752 3648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ip-172-31-25-61" podStartSLOduration=3.5407441139999998 podStartE2EDuration="3.540744114s" podCreationTimestamp="2026-04-17 23:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:22.540461295 +0000 UTC m=+1.370463357" watchObservedRunningTime="2026-04-17 23:36:22.540744114 +0000 UTC m=+1.370746180" Apr 17 23:36:23.857986 sudo[2452]: pam_unix(sudo:session): session closed for user root Apr 17 23:36:24.024687 sshd[2448]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:24.029229 systemd-logind[2063]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:36:24.030609 systemd[1]: sshd@6-172.31.25.61:22-20.229.252.112:44898.service: Deactivated successfully. Apr 17 23:36:24.034824 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:36:24.036564 systemd-logind[2063]: Removed session 7. Apr 17 23:36:24.351398 kubelet[3648]: I0417 23:36:24.351314 3648 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:36:24.356221 containerd[2094]: time="2026-04-17T23:36:24.354755907Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 17 23:36:24.356689 kubelet[3648]: I0417 23:36:24.355251 3648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:36:30.645158 kubelet[3648]: I0417 23:36:30.645099 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3694d85c-3026-441e-a9ac-335fc3ba1b45-cilium-config-path\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.645158 kubelet[3648]: I0417 23:36:30.645165 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3694d85c-3026-441e-a9ac-335fc3ba1b45-hubble-tls\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.645158 kubelet[3648]: I0417 23:36:30.645189 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-bpf-maps\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.645819 kubelet[3648]: I0417 23:36:30.645222 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-hostproc\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.645819 kubelet[3648]: I0417 23:36:30.645245 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-xtables-lock\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " 
pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.645819 kubelet[3648]: I0417 23:36:30.645269 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffz8h\" (UniqueName: \"kubernetes.io/projected/17852fc7-e091-4b82-95f4-4543d8860eea-kube-api-access-ffz8h\") pod \"cilium-operator-6c4d7847fc-7xdzh\" (UID: \"17852fc7-e091-4b82-95f4-4543d8860eea\") " pod="kube-system/cilium-operator-6c4d7847fc-7xdzh" Apr 17 23:36:30.645819 kubelet[3648]: I0417 23:36:30.645293 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-cilium-cgroup\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.645819 kubelet[3648]: I0417 23:36:30.645319 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-cilium-run\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.645957 kubelet[3648]: I0417 23:36:30.645347 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-host-proc-sys-net\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.645957 kubelet[3648]: I0417 23:36:30.645375 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd95d969-1b65-46c7-89e9-83f84fb0472f-xtables-lock\") pod \"kube-proxy-lv7c7\" (UID: \"fd95d969-1b65-46c7-89e9-83f84fb0472f\") " pod="kube-system/kube-proxy-lv7c7" Apr 17 23:36:30.645957 kubelet[3648]: 
I0417 23:36:30.645395 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd95d969-1b65-46c7-89e9-83f84fb0472f-lib-modules\") pod \"kube-proxy-lv7c7\" (UID: \"fd95d969-1b65-46c7-89e9-83f84fb0472f\") " pod="kube-system/kube-proxy-lv7c7" Apr 17 23:36:30.645957 kubelet[3648]: I0417 23:36:30.645417 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17852fc7-e091-4b82-95f4-4543d8860eea-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7xdzh\" (UID: \"17852fc7-e091-4b82-95f4-4543d8860eea\") " pod="kube-system/cilium-operator-6c4d7847fc-7xdzh" Apr 17 23:36:30.645957 kubelet[3648]: I0417 23:36:30.645442 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-etc-cni-netd\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.646085 kubelet[3648]: I0417 23:36:30.645466 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3694d85c-3026-441e-a9ac-335fc3ba1b45-clustermesh-secrets\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.646085 kubelet[3648]: I0417 23:36:30.645490 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd95d969-1b65-46c7-89e9-83f84fb0472f-kube-proxy\") pod \"kube-proxy-lv7c7\" (UID: \"fd95d969-1b65-46c7-89e9-83f84fb0472f\") " pod="kube-system/kube-proxy-lv7c7" Apr 17 23:36:30.646085 kubelet[3648]: I0417 23:36:30.645519 3648 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wq5r\" (UniqueName: \"kubernetes.io/projected/fd95d969-1b65-46c7-89e9-83f84fb0472f-kube-api-access-5wq5r\") pod \"kube-proxy-lv7c7\" (UID: \"fd95d969-1b65-46c7-89e9-83f84fb0472f\") " pod="kube-system/kube-proxy-lv7c7" Apr 17 23:36:30.646085 kubelet[3648]: I0417 23:36:30.645543 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-cni-path\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.646085 kubelet[3648]: I0417 23:36:30.645567 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-host-proc-sys-kernel\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.646255 kubelet[3648]: I0417 23:36:30.645594 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxdjt\" (UniqueName: \"kubernetes.io/projected/3694d85c-3026-441e-a9ac-335fc3ba1b45-kube-api-access-lxdjt\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.646255 kubelet[3648]: I0417 23:36:30.645617 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-lib-modules\") pod \"cilium-ln5rr\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") " pod="kube-system/cilium-ln5rr" Apr 17 23:36:30.847500 containerd[2094]: time="2026-04-17T23:36:30.847125953Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-ln5rr,Uid:3694d85c-3026-441e-a9ac-335fc3ba1b45,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:30.852139 containerd[2094]: time="2026-04-17T23:36:30.852095395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7xdzh,Uid:17852fc7-e091-4b82-95f4-4543d8860eea,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:30.854687 containerd[2094]: time="2026-04-17T23:36:30.854606171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lv7c7,Uid:fd95d969-1b65-46c7-89e9-83f84fb0472f,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:30.917353 containerd[2094]: time="2026-04-17T23:36:30.915898288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:30.917353 containerd[2094]: time="2026-04-17T23:36:30.915979469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:30.917353 containerd[2094]: time="2026-04-17T23:36:30.916020745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:30.917353 containerd[2094]: time="2026-04-17T23:36:30.916145322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:30.920872 containerd[2094]: time="2026-04-17T23:36:30.920536602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:30.920872 containerd[2094]: time="2026-04-17T23:36:30.920634262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:30.920872 containerd[2094]: time="2026-04-17T23:36:30.920653603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:30.920872 containerd[2094]: time="2026-04-17T23:36:30.920773548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:30.932885 containerd[2094]: time="2026-04-17T23:36:30.932081060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:30.932885 containerd[2094]: time="2026-04-17T23:36:30.932507981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:30.932885 containerd[2094]: time="2026-04-17T23:36:30.932560210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:30.932885 containerd[2094]: time="2026-04-17T23:36:30.932788805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:31.015007 containerd[2094]: time="2026-04-17T23:36:31.014962701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ln5rr,Uid:3694d85c-3026-441e-a9ac-335fc3ba1b45,Namespace:kube-system,Attempt:0,} returns sandbox id \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\"" Apr 17 23:36:31.018330 containerd[2094]: time="2026-04-17T23:36:31.018294843Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 17 23:36:31.028091 containerd[2094]: time="2026-04-17T23:36:31.028049396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lv7c7,Uid:fd95d969-1b65-46c7-89e9-83f84fb0472f,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa1ad9bbd6c053a988244e095cdab691b8e760f67a66170449fd89ab0e43f146\"" Apr 17 23:36:31.037560 containerd[2094]: time="2026-04-17T23:36:31.037493388Z" level=info msg="CreateContainer within sandbox \"aa1ad9bbd6c053a988244e095cdab691b8e760f67a66170449fd89ab0e43f146\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:36:31.054292 containerd[2094]: time="2026-04-17T23:36:31.054031557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7xdzh,Uid:17852fc7-e091-4b82-95f4-4543d8860eea,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\"" Apr 17 23:36:31.070535 containerd[2094]: time="2026-04-17T23:36:31.070405618Z" level=info msg="CreateContainer within sandbox \"aa1ad9bbd6c053a988244e095cdab691b8e760f67a66170449fd89ab0e43f146\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00c42a33d64bd3cef90e29ad977d8eaa626b1b70bae8f68f892cb0e55e0ddbf8\"" Apr 17 23:36:31.071291 containerd[2094]: time="2026-04-17T23:36:31.071095777Z" level=info msg="StartContainer for 
\"00c42a33d64bd3cef90e29ad977d8eaa626b1b70bae8f68f892cb0e55e0ddbf8\"" Apr 17 23:36:31.130284 containerd[2094]: time="2026-04-17T23:36:31.130186250Z" level=info msg="StartContainer for \"00c42a33d64bd3cef90e29ad977d8eaa626b1b70bae8f68f892cb0e55e0ddbf8\" returns successfully" Apr 17 23:36:31.502923 kubelet[3648]: I0417 23:36:31.502815 3648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lv7c7" podStartSLOduration=7.502792451 podStartE2EDuration="7.502792451s" podCreationTimestamp="2026-04-17 23:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:31.502655578 +0000 UTC m=+10.332657652" watchObservedRunningTime="2026-04-17 23:36:31.502792451 +0000 UTC m=+10.332794514" Apr 17 23:36:42.884760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197496734.mount: Deactivated successfully. Apr 17 23:36:45.443751 containerd[2094]: time="2026-04-17T23:36:45.443696081Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:45.447172 containerd[2094]: time="2026-04-17T23:36:45.447115529Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 17 23:36:45.486842 containerd[2094]: time="2026-04-17T23:36:45.484060244Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:45.489735 containerd[2094]: time="2026-04-17T23:36:45.489691594Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.471346573s" Apr 17 23:36:45.489735 containerd[2094]: time="2026-04-17T23:36:45.489737074Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 17 23:36:45.508273 containerd[2094]: time="2026-04-17T23:36:45.508233870Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 17 23:36:45.515789 containerd[2094]: time="2026-04-17T23:36:45.515748182Z" level=info msg="CreateContainer within sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 17 23:36:45.618990 containerd[2094]: time="2026-04-17T23:36:45.618934920Z" level=info msg="CreateContainer within sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc\"" Apr 17 23:36:45.620114 containerd[2094]: time="2026-04-17T23:36:45.619978377Z" level=info msg="StartContainer for \"8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc\"" Apr 17 23:36:45.794934 systemd[1]: run-containerd-runc-k8s.io-8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc-runc.sb2d4W.mount: Deactivated successfully. 
Apr 17 23:36:45.836009 containerd[2094]: time="2026-04-17T23:36:45.835917446Z" level=info msg="StartContainer for \"8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc\" returns successfully" Apr 17 23:36:46.027444 containerd[2094]: time="2026-04-17T23:36:46.002832060Z" level=info msg="shim disconnected" id=8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc namespace=k8s.io Apr 17 23:36:46.027444 containerd[2094]: time="2026-04-17T23:36:46.027444388Z" level=warning msg="cleaning up after shim disconnected" id=8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc namespace=k8s.io Apr 17 23:36:46.027759 containerd[2094]: time="2026-04-17T23:36:46.027463433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:46.533657 containerd[2094]: time="2026-04-17T23:36:46.533616450Z" level=info msg="CreateContainer within sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 17 23:36:46.553092 containerd[2094]: time="2026-04-17T23:36:46.552932890Z" level=info msg="CreateContainer within sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335\"" Apr 17 23:36:46.553791 containerd[2094]: time="2026-04-17T23:36:46.553755816Z" level=info msg="StartContainer for \"0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335\"" Apr 17 23:36:46.615710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc-rootfs.mount: Deactivated successfully. 
Apr 17 23:36:46.632492 containerd[2094]: time="2026-04-17T23:36:46.631653630Z" level=info msg="StartContainer for \"0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335\" returns successfully" Apr 17 23:36:46.647255 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:36:46.648828 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:36:46.648920 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:36:46.657410 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:36:46.681163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335-rootfs.mount: Deactivated successfully. Apr 17 23:36:46.696772 containerd[2094]: time="2026-04-17T23:36:46.696672024Z" level=info msg="shim disconnected" id=0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335 namespace=k8s.io Apr 17 23:36:46.697063 containerd[2094]: time="2026-04-17T23:36:46.696769214Z" level=warning msg="cleaning up after shim disconnected" id=0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335 namespace=k8s.io Apr 17 23:36:46.697063 containerd[2094]: time="2026-04-17T23:36:46.696810318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:46.705851 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:36:46.730867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633394466.mount: Deactivated successfully. Apr 17 23:36:47.058485 systemd-resolved[1976]: Under memory pressure, flushing caches. Apr 17 23:36:47.058564 systemd-resolved[1976]: Flushed all caches. Apr 17 23:36:47.060234 systemd-journald[1565]: Under memory pressure, flushing caches. 
Apr 17 23:36:47.538218 containerd[2094]: time="2026-04-17T23:36:47.538154925Z" level=info msg="CreateContainer within sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 17 23:36:47.585218 containerd[2094]: time="2026-04-17T23:36:47.583246254Z" level=info msg="CreateContainer within sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea\"" Apr 17 23:36:47.585218 containerd[2094]: time="2026-04-17T23:36:47.584902686Z" level=info msg="StartContainer for \"618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea\"" Apr 17 23:36:47.701874 containerd[2094]: time="2026-04-17T23:36:47.701514114Z" level=info msg="StartContainer for \"618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea\" returns successfully" Apr 17 23:36:47.729490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea-rootfs.mount: Deactivated successfully. 
Apr 17 23:36:47.736586 containerd[2094]: time="2026-04-17T23:36:47.736523653Z" level=info msg="shim disconnected" id=618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea namespace=k8s.io Apr 17 23:36:47.736586 containerd[2094]: time="2026-04-17T23:36:47.736583006Z" level=warning msg="cleaning up after shim disconnected" id=618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea namespace=k8s.io Apr 17 23:36:47.736800 containerd[2094]: time="2026-04-17T23:36:47.736594541Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:48.546000 containerd[2094]: time="2026-04-17T23:36:48.545750964Z" level=info msg="CreateContainer within sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 17 23:36:48.565188 containerd[2094]: time="2026-04-17T23:36:48.565142882Z" level=info msg="CreateContainer within sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473\"" Apr 17 23:36:48.568334 containerd[2094]: time="2026-04-17T23:36:48.566032623Z" level=info msg="StartContainer for \"2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473\"" Apr 17 23:36:48.631524 containerd[2094]: time="2026-04-17T23:36:48.631480376Z" level=info msg="StartContainer for \"2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473\" returns successfully" Apr 17 23:36:48.651689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473-rootfs.mount: Deactivated successfully. 
Apr 17 23:36:48.703714 containerd[2094]: time="2026-04-17T23:36:48.702226657Z" level=info msg="shim disconnected" id=2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473 namespace=k8s.io Apr 17 23:36:48.703714 containerd[2094]: time="2026-04-17T23:36:48.702288524Z" level=warning msg="cleaning up after shim disconnected" id=2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473 namespace=k8s.io Apr 17 23:36:48.703714 containerd[2094]: time="2026-04-17T23:36:48.702300651Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:49.310389 containerd[2094]: time="2026-04-17T23:36:49.310340890Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:49.334845 containerd[2094]: time="2026-04-17T23:36:49.334781016Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 17 23:36:49.336759 containerd[2094]: time="2026-04-17T23:36:49.336690315Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:49.338539 containerd[2094]: time="2026-04-17T23:36:49.338391570Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.83011404s" Apr 17 23:36:49.338539 containerd[2094]: time="2026-04-17T23:36:49.338440215Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 17 23:36:49.342911 containerd[2094]: time="2026-04-17T23:36:49.342858398Z" level=info msg="CreateContainer within sandbox \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 17 23:36:49.359679 containerd[2094]: time="2026-04-17T23:36:49.359610460Z" level=info msg="CreateContainer within sandbox \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\"" Apr 17 23:36:49.361418 containerd[2094]: time="2026-04-17T23:36:49.360479036Z" level=info msg="StartContainer for \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\"" Apr 17 23:36:49.417230 containerd[2094]: time="2026-04-17T23:36:49.417168658Z" level=info msg="StartContainer for \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\" returns successfully" Apr 17 23:36:49.559547 containerd[2094]: time="2026-04-17T23:36:49.558850769Z" level=info msg="CreateContainer within sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 17 23:36:49.586284 containerd[2094]: time="2026-04-17T23:36:49.584405130Z" level=info msg="CreateContainer within sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\"" Apr 17 23:36:49.587454 containerd[2094]: time="2026-04-17T23:36:49.586625604Z" level=info msg="StartContainer for \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\"" Apr 17 
23:36:49.669108 kubelet[3648]: I0417 23:36:49.662097 3648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7xdzh" podStartSLOduration=6.377568258 podStartE2EDuration="24.659953126s" podCreationTimestamp="2026-04-17 23:36:25 +0000 UTC" firstStartedPulling="2026-04-17 23:36:31.057068599 +0000 UTC m=+9.887070641" lastFinishedPulling="2026-04-17 23:36:49.339453468 +0000 UTC m=+28.169455509" observedRunningTime="2026-04-17 23:36:49.564437038 +0000 UTC m=+28.394439103" watchObservedRunningTime="2026-04-17 23:36:49.659953126 +0000 UTC m=+28.489955188" Apr 17 23:36:49.771507 containerd[2094]: time="2026-04-17T23:36:49.769502101Z" level=info msg="StartContainer for \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\" returns successfully" Apr 17 23:36:50.177624 kubelet[3648]: I0417 23:36:50.177590 3648 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 23:36:50.371994 kubelet[3648]: I0417 23:36:50.371813 3648 status_manager.go:895] "Failed to get status for pod" podUID="afba6cf2-1f0b-4aff-afbb-3b234ab8cf3d" pod="kube-system/coredns-674b8bbfcf-wtgkq" err="pods \"coredns-674b8bbfcf-wtgkq\" is forbidden: User \"system:node:ip-172-31-25-61\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-61' and this object" Apr 17 23:36:50.372772 kubelet[3648]: E0417 23:36:50.372736 3648 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-25-61\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-61' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 17 23:36:50.400678 kubelet[3648]: I0417 23:36:50.400505 3648 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb8qz\" (UniqueName: \"kubernetes.io/projected/2a4a3445-824d-4115-8208-a6d482807c3c-kube-api-access-pb8qz\") pod \"coredns-674b8bbfcf-ppsmh\" (UID: \"2a4a3445-824d-4115-8208-a6d482807c3c\") " pod="kube-system/coredns-674b8bbfcf-ppsmh" Apr 17 23:36:50.400678 kubelet[3648]: I0417 23:36:50.400557 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afba6cf2-1f0b-4aff-afbb-3b234ab8cf3d-config-volume\") pod \"coredns-674b8bbfcf-wtgkq\" (UID: \"afba6cf2-1f0b-4aff-afbb-3b234ab8cf3d\") " pod="kube-system/coredns-674b8bbfcf-wtgkq" Apr 17 23:36:50.400678 kubelet[3648]: I0417 23:36:50.400586 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs98x\" (UniqueName: \"kubernetes.io/projected/afba6cf2-1f0b-4aff-afbb-3b234ab8cf3d-kube-api-access-xs98x\") pod \"coredns-674b8bbfcf-wtgkq\" (UID: \"afba6cf2-1f0b-4aff-afbb-3b234ab8cf3d\") " pod="kube-system/coredns-674b8bbfcf-wtgkq" Apr 17 23:36:50.400678 kubelet[3648]: I0417 23:36:50.400608 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a4a3445-824d-4115-8208-a6d482807c3c-config-volume\") pod \"coredns-674b8bbfcf-ppsmh\" (UID: \"2a4a3445-824d-4115-8208-a6d482807c3c\") " pod="kube-system/coredns-674b8bbfcf-ppsmh" Apr 17 23:36:51.504035 kubelet[3648]: E0417 23:36:51.503982 3648 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Apr 17 23:36:51.504850 kubelet[3648]: E0417 23:36:51.504110 3648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/afba6cf2-1f0b-4aff-afbb-3b234ab8cf3d-config-volume podName:afba6cf2-1f0b-4aff-afbb-3b234ab8cf3d nodeName:}" failed. 
No retries permitted until 2026-04-17 23:36:52.004079454 +0000 UTC m=+30.834081501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/afba6cf2-1f0b-4aff-afbb-3b234ab8cf3d-config-volume") pod "coredns-674b8bbfcf-wtgkq" (UID: "afba6cf2-1f0b-4aff-afbb-3b234ab8cf3d") : failed to sync configmap cache: timed out waiting for the condition Apr 17 23:36:51.504850 kubelet[3648]: E0417 23:36:51.503982 3648 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Apr 17 23:36:51.504850 kubelet[3648]: E0417 23:36:51.504445 3648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a4a3445-824d-4115-8208-a6d482807c3c-config-volume podName:2a4a3445-824d-4115-8208-a6d482807c3c nodeName:}" failed. No retries permitted until 2026-04-17 23:36:52.004418258 +0000 UTC m=+30.834420302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a4a3445-824d-4115-8208-a6d482807c3c-config-volume") pod "coredns-674b8bbfcf-ppsmh" (UID: "2a4a3445-824d-4115-8208-a6d482807c3c") : failed to sync configmap cache: timed out waiting for the condition Apr 17 23:36:52.154211 containerd[2094]: time="2026-04-17T23:36:52.154131534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wtgkq,Uid:afba6cf2-1f0b-4aff-afbb-3b234ab8cf3d,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:52.165083 containerd[2094]: time="2026-04-17T23:36:52.165036339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ppsmh,Uid:2a4a3445-824d-4115-8208-a6d482807c3c,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:55.541273 systemd-networkd[1649]: cilium_host: Link UP Apr 17 23:36:55.543900 (udev-worker)[4442]: Network interface NamePolicy= disabled on kernel command line. 
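Both `MountVolume.SetUp` failures above are re-queued rather than retried immediately: the failure logged at 23:36:51.504079454 may not retry before 23:36:52.004079454, i.e. `durationBeforeRetry 500ms`. A sketch of the exponential backoff this implies (the 500ms initial delay is read directly from the log; the doubling factor and the roughly two-minute cap are assumptions about kubelet's `nestedpendingoperations`, not confirmed by this log):

```python
# Assumed backoff parameters: only INITIAL_S is read from the log above.
INITIAL_S = 0.5    # durationBeforeRetry on the first failure
FACTOR = 2.0       # assumed doubling on consecutive failures
CAP_S = 120.0      # assumed upper bound on the delay

def duration_before_retry(failures: int) -> float:
    """Delay applied after the Nth consecutive failure (N >= 1)."""
    return min(INITIAL_S * FACTOR ** (failures - 1), CAP_S)

# Failure time as seconds past 23:36:00 UTC, from the log entry above.
fail_at = 51.504079454
retry_at = fail_at + duration_before_retry(1)
print(f"{retry_at:.9f}")  # 52.004079454, matching "No retries permitted until"
```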
Apr 17 23:36:55.544157 (udev-worker)[4444]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:36:55.546760 systemd-networkd[1649]: cilium_net: Link UP Apr 17 23:36:55.547189 systemd-networkd[1649]: cilium_net: Gained carrier Apr 17 23:36:55.547578 systemd-networkd[1649]: cilium_host: Gained carrier Apr 17 23:36:55.778694 systemd-networkd[1649]: cilium_net: Gained IPv6LL Apr 17 23:36:55.781236 (udev-worker)[4480]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:36:55.790721 systemd-networkd[1649]: cilium_vxlan: Link UP Apr 17 23:36:55.791264 systemd-networkd[1649]: cilium_vxlan: Gained carrier Apr 17 23:36:56.146496 systemd-networkd[1649]: cilium_host: Gained IPv6LL Apr 17 23:36:56.979371 systemd-networkd[1649]: cilium_vxlan: Gained IPv6LL Apr 17 23:36:57.315973 kernel: NET: Registered PF_ALG protocol family Apr 17 23:36:58.175424 systemd-networkd[1649]: lxc_health: Link UP Apr 17 23:36:58.180171 systemd-networkd[1649]: lxc_health: Gained carrier Apr 17 23:36:58.778630 systemd-networkd[1649]: lxcc68f2df39d38: Link UP Apr 17 23:36:58.781582 kernel: eth0: renamed from tmp7b3b0 Apr 17 23:36:58.782734 (udev-worker)[4813]: Network interface NamePolicy= disabled on kernel command line. 
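A few entries back, the `pod_startup_latency_tracker` line for cilium-operator-6c4d7847fc-7xdzh reports `podStartE2EDuration="24.659953126s"` but `podStartSLOduration=6.377568258`: the SLO figure excludes time spent pulling images. The timestamps in that entry are self-consistent, as this sketch checks (timestamps reduced to seconds past 23:36:00 UTC for readability):

```python
# Timestamps from the cilium-operator pod_startup_latency_tracker entry,
# expressed as seconds past 23:36:00 UTC.
created = 25.0                 # podCreationTimestamp 23:36:25
first_pulling = 31.057068599   # firstStartedPulling
last_pulling = 49.339453468    # lastFinishedPulling
watch_observed = 49.659953126  # watchObservedRunningTime

e2e = watch_observed - created        # podStartE2EDuration
pull = last_pulling - first_pulling   # time spent pulling images
slo = e2e - pull                      # podStartSLOduration excludes pulls

print(f"e2e={e2e:.9f}s pull={pull:.9f}s slo={slo:.9f}s")
```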
Apr 17 23:36:58.784521 systemd-networkd[1649]: lxc7d7eb6448445: Link UP Apr 17 23:36:58.788618 systemd-networkd[1649]: lxcc68f2df39d38: Gained carrier Apr 17 23:36:58.795217 kernel: eth0: renamed from tmp53038 Apr 17 23:36:58.806079 systemd-networkd[1649]: lxc7d7eb6448445: Gained carrier Apr 17 23:36:58.888728 kubelet[3648]: I0417 23:36:58.888661 3648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ln5rr" podStartSLOduration=20.398424062 podStartE2EDuration="34.888641661s" podCreationTimestamp="2026-04-17 23:36:24 +0000 UTC" firstStartedPulling="2026-04-17 23:36:31.017715569 +0000 UTC m=+9.847717626" lastFinishedPulling="2026-04-17 23:36:45.507933164 +0000 UTC m=+24.337935225" observedRunningTime="2026-04-17 23:36:50.6548908 +0000 UTC m=+29.484892863" watchObservedRunningTime="2026-04-17 23:36:58.888641661 +0000 UTC m=+37.718643727" Apr 17 23:36:59.603511 systemd-networkd[1649]: lxc_health: Gained IPv6LL Apr 17 23:36:59.922413 systemd-networkd[1649]: lxcc68f2df39d38: Gained IPv6LL Apr 17 23:37:00.628400 systemd-networkd[1649]: lxc7d7eb6448445: Gained IPv6LL Apr 17 23:37:03.177152 containerd[2094]: time="2026-04-17T23:37:03.175151039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:03.177152 containerd[2094]: time="2026-04-17T23:37:03.175262280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:03.177152 containerd[2094]: time="2026-04-17T23:37:03.175288046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:03.177152 containerd[2094]: time="2026-04-17T23:37:03.175402629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:03.238640 containerd[2094]: time="2026-04-17T23:37:03.236295011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:03.238640 containerd[2094]: time="2026-04-17T23:37:03.236374845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:03.238640 containerd[2094]: time="2026-04-17T23:37:03.236399141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:03.238640 containerd[2094]: time="2026-04-17T23:37:03.236541106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:03.380429 containerd[2094]: time="2026-04-17T23:37:03.380383719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ppsmh,Uid:2a4a3445-824d-4115-8208-a6d482807c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b3b01f2d125fd2f57209a44d0e51033d7070a1cc945e5256063f5abdd5223e3\"" Apr 17 23:37:03.389925 containerd[2094]: time="2026-04-17T23:37:03.389869152Z" level=info msg="CreateContainer within sandbox \"7b3b01f2d125fd2f57209a44d0e51033d7070a1cc945e5256063f5abdd5223e3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:37:03.415327 containerd[2094]: time="2026-04-17T23:37:03.415269901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wtgkq,Uid:afba6cf2-1f0b-4aff-afbb-3b234ab8cf3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53038614c3350e8f7e41c46e55ec085be193442bb0fd615b2a9ab93a1556e37d\"" Apr 17 23:37:03.423216 containerd[2094]: time="2026-04-17T23:37:03.423170681Z" level=info msg="CreateContainer within sandbox \"53038614c3350e8f7e41c46e55ec085be193442bb0fd615b2a9ab93a1556e37d\" for 
container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:37:03.444056 containerd[2094]: time="2026-04-17T23:37:03.444005800Z" level=info msg="CreateContainer within sandbox \"7b3b01f2d125fd2f57209a44d0e51033d7070a1cc945e5256063f5abdd5223e3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bcc4f00646334a1761974cccf6b7e1324cc1c4fcb93655b15cf506c234ab74c8\"" Apr 17 23:37:03.445232 containerd[2094]: time="2026-04-17T23:37:03.444948403Z" level=info msg="StartContainer for \"bcc4f00646334a1761974cccf6b7e1324cc1c4fcb93655b15cf506c234ab74c8\"" Apr 17 23:37:03.456862 containerd[2094]: time="2026-04-17T23:37:03.456818337Z" level=info msg="CreateContainer within sandbox \"53038614c3350e8f7e41c46e55ec085be193442bb0fd615b2a9ab93a1556e37d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"776505fc3d9849a829de051265c96e004e07a3d3366cdd5e70ba31f27283f535\"" Apr 17 23:37:03.458387 containerd[2094]: time="2026-04-17T23:37:03.457496601Z" level=info msg="StartContainer for \"776505fc3d9849a829de051265c96e004e07a3d3366cdd5e70ba31f27283f535\"" Apr 17 23:37:03.561392 containerd[2094]: time="2026-04-17T23:37:03.561344475Z" level=info msg="StartContainer for \"bcc4f00646334a1761974cccf6b7e1324cc1c4fcb93655b15cf506c234ab74c8\" returns successfully" Apr 17 23:37:03.561731 containerd[2094]: time="2026-04-17T23:37:03.561637505Z" level=info msg="StartContainer for \"776505fc3d9849a829de051265c96e004e07a3d3366cdd5e70ba31f27283f535\" returns successfully" Apr 17 23:37:03.614725 kubelet[3648]: I0417 23:37:03.613567 3648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ppsmh" podStartSLOduration=38.613546201 podStartE2EDuration="38.613546201s" podCreationTimestamp="2026-04-17 23:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:37:03.612439045 +0000 UTC m=+42.442441118" 
watchObservedRunningTime="2026-04-17 23:37:03.613546201 +0000 UTC m=+42.443548265" Apr 17 23:37:03.629662 kubelet[3648]: I0417 23:37:03.629604 3648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wtgkq" podStartSLOduration=38.629582325 podStartE2EDuration="38.629582325s" podCreationTimestamp="2026-04-17 23:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:37:03.629328277 +0000 UTC m=+42.459330341" watchObservedRunningTime="2026-04-17 23:37:03.629582325 +0000 UTC m=+42.459584390" Apr 17 23:37:04.187103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249104032.mount: Deactivated successfully. Apr 17 23:37:05.334490 ntpd[2049]: Listen normally on 6 cilium_host 192.168.0.119:123 Apr 17 23:37:05.334582 ntpd[2049]: Listen normally on 7 cilium_net [fe80::1b:1eff:fe30:5f3%4]:123 Apr 17 23:37:05.334639 ntpd[2049]: Listen normally on 8 cilium_host [fe80::7024:3bff:fe10:6fb%5]:123 Apr 17 23:37:05.334687 ntpd[2049]: 
Listen normally on 9 cilium_vxlan [fe80::8022:41ff:fee0:cf24%6]:123 Apr 17 23:37:05.334739 ntpd[2049]: Listen normally on 10 lxc_health [fe80::2481:7bff:feed:9a2e%8]:123 Apr 17 23:37:05.334779 ntpd[2049]: Listen normally on 11 lxc7d7eb6448445 [fe80::2800:96ff:fe81:841f%10]:123 Apr 17 23:37:05.334817 ntpd[2049]: Listen normally on 12 lxcc68f2df39d38 [fe80::7ca6:4aff:fe14:f2a0%12]:123 Apr 17 23:37:06.056591 systemd[1]: Started sshd@7-172.31.25.61:22-20.229.252.112:58584.service - OpenSSH per-connection server daemon (20.229.252.112:58584). Apr 17 23:37:07.099939 sshd[5009]: Accepted publickey for core from 20.229.252.112 port 58584 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:07.101073 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:07.119324 systemd-logind[2063]: New session 8 of user core. Apr 17 23:37:07.123690 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:37:08.564958 sshd[5009]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:08.571089 systemd[1]: sshd@7-172.31.25.61:22-20.229.252.112:58584.service: Deactivated successfully. Apr 17 23:37:08.575339 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:37:08.576478 systemd-logind[2063]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:37:08.577883 systemd-logind[2063]: Removed session 8. Apr 17 23:37:13.736986 systemd[1]: Started sshd@8-172.31.25.61:22-20.229.252.112:58598.service - OpenSSH per-connection server daemon (20.229.252.112:58598). Apr 17 23:37:14.761158 sshd[5033]: Accepted publickey for core from 20.229.252.112 port 58598 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:14.762813 sshd[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:14.768457 systemd-logind[2063]: New session 9 of user core. Apr 17 23:37:14.773669 systemd[1]: Started session-9.scope - Session 9 of User core. 
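The kubelet timestamps above carry Go's monotonic-clock suffix (`m=+…`): a wall-clock reading paired with seconds elapsed since the process first sampled the clock. Subtracting the monotonic offset from the wall time should therefore yield the same instant for every entry from one process, and two independent samples from the log agree to within nanoseconds (sketch; wall times as seconds past 23:36:00 UTC):

```python
# Two (wall, monotonic) pairs copied from kubelet entries above,
# wall times expressed as seconds past 23:36:00 UTC.
samples = [
    (49.659953126, 28.489955188),  # 23:36:49.659953126 ... m=+28.489955188
    (63.613546201, 42.443548265),  # 23:37:03.613546201 ... m=+42.443548265
]

# wall - monotonic recovers the process's clock epoch (~23:36:21.170 UTC).
starts = [wall - mono for wall, mono in samples]
print(starts)
```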
Apr 17 23:37:15.549021 sshd[5033]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:15.554291 systemd-logind[2063]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:37:15.556323 systemd[1]: sshd@8-172.31.25.61:22-20.229.252.112:58598.service: Deactivated successfully. Apr 17 23:37:15.560597 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:37:15.561433 systemd-logind[2063]: Removed session 9. Apr 17 23:37:20.721636 systemd[1]: Started sshd@9-172.31.25.61:22-20.229.252.112:60582.service - OpenSSH per-connection server daemon (20.229.252.112:60582). Apr 17 23:37:21.743024 sshd[5055]: Accepted publickey for core from 20.229.252.112 port 60582 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:21.744812 sshd[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:21.751423 systemd-logind[2063]: New session 10 of user core. Apr 17 23:37:21.758812 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:37:22.527983 sshd[5055]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:22.532267 systemd[1]: sshd@9-172.31.25.61:22-20.229.252.112:60582.service: Deactivated successfully. Apr 17 23:37:22.536895 systemd-logind[2063]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:37:22.537933 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:37:22.539607 systemd-logind[2063]: Removed session 10. Apr 17 23:37:27.701926 systemd[1]: Started sshd@10-172.31.25.61:22-20.229.252.112:54392.service - OpenSSH per-connection server daemon (20.229.252.112:54392). Apr 17 23:37:28.721651 sshd[5072]: Accepted publickey for core from 20.229.252.112 port 54392 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:28.723328 sshd[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:28.728722 systemd-logind[2063]: New session 11 of user core. 
Apr 17 23:37:28.731494 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:37:29.513072 sshd[5072]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:29.518910 systemd[1]: sshd@10-172.31.25.61:22-20.229.252.112:54392.service: Deactivated successfully. Apr 17 23:37:29.523642 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:37:29.523696 systemd-logind[2063]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:37:29.526666 systemd-logind[2063]: Removed session 11. Apr 17 23:37:29.688031 systemd[1]: Started sshd@11-172.31.25.61:22-20.229.252.112:54398.service - OpenSSH per-connection server daemon (20.229.252.112:54398). Apr 17 23:37:30.730634 sshd[5087]: Accepted publickey for core from 20.229.252.112 port 54398 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:30.731383 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:30.737540 systemd-logind[2063]: New session 12 of user core. Apr 17 23:37:30.743684 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:37:31.598032 sshd[5087]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:31.608152 systemd[1]: sshd@11-172.31.25.61:22-20.229.252.112:54398.service: Deactivated successfully. Apr 17 23:37:31.612319 systemd-logind[2063]: Session 12 logged out. Waiting for processes to exit. Apr 17 23:37:31.612718 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:37:31.615137 systemd-logind[2063]: Removed session 12. Apr 17 23:37:31.769655 systemd[1]: Started sshd@12-172.31.25.61:22-20.229.252.112:54400.service - OpenSSH per-connection server daemon (20.229.252.112:54400). 
Apr 17 23:37:32.795471 sshd[5099]: Accepted publickey for core from 20.229.252.112 port 54400 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:32.798225 sshd[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:32.803450 systemd-logind[2063]: New session 13 of user core. Apr 17 23:37:32.809691 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:37:33.586177 sshd[5099]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:33.589768 systemd[1]: sshd@12-172.31.25.61:22-20.229.252.112:54400.service: Deactivated successfully. Apr 17 23:37:33.594015 systemd-logind[2063]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:37:33.595603 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:37:33.598141 systemd-logind[2063]: Removed session 13. Apr 17 23:37:38.759789 systemd[1]: Started sshd@13-172.31.25.61:22-20.229.252.112:38902.service - OpenSSH per-connection server daemon (20.229.252.112:38902). Apr 17 23:37:39.774790 sshd[5116]: Accepted publickey for core from 20.229.252.112 port 38902 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:39.776444 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:39.781698 systemd-logind[2063]: New session 14 of user core. Apr 17 23:37:39.784556 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:37:40.553748 sshd[5116]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:40.559488 systemd[1]: sshd@13-172.31.25.61:22-20.229.252.112:38902.service: Deactivated successfully. Apr 17 23:37:40.559927 systemd-logind[2063]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:37:40.563657 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:37:40.565005 systemd-logind[2063]: Removed session 14. 
Apr 17 23:37:40.726554 systemd[1]: Started sshd@14-172.31.25.61:22-20.229.252.112:38908.service - OpenSSH per-connection server daemon (20.229.252.112:38908). Apr 17 23:37:41.773051 sshd[5131]: Accepted publickey for core from 20.229.252.112 port 38908 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:41.774624 sshd[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:41.780472 systemd-logind[2063]: New session 15 of user core. Apr 17 23:37:41.783519 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:37:43.298714 sshd[5131]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:43.309254 systemd-logind[2063]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:37:43.310279 systemd[1]: sshd@14-172.31.25.61:22-20.229.252.112:38908.service: Deactivated successfully. Apr 17 23:37:43.315099 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:37:43.316370 systemd-logind[2063]: Removed session 15. Apr 17 23:37:43.471637 systemd[1]: Started sshd@15-172.31.25.61:22-20.229.252.112:38916.service - OpenSSH per-connection server daemon (20.229.252.112:38916). Apr 17 23:37:44.489051 sshd[5143]: Accepted publickey for core from 20.229.252.112 port 38916 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:44.489926 sshd[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:44.495914 systemd-logind[2063]: New session 16 of user core. Apr 17 23:37:44.501595 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:37:45.753508 sshd[5143]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:45.761704 systemd[1]: sshd@15-172.31.25.61:22-20.229.252.112:38916.service: Deactivated successfully. Apr 17 23:37:45.767376 systemd-logind[2063]: Session 16 logged out. Waiting for processes to exit. 
Apr 17 23:37:45.767820 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:37:45.770440 systemd-logind[2063]: Removed session 16.
Apr 17 23:37:45.925534 systemd[1]: Started sshd@16-172.31.25.61:22-20.229.252.112:47202.service - OpenSSH per-connection server daemon (20.229.252.112:47202).
Apr 17 23:37:46.954816 sshd[5162]: Accepted publickey for core from 20.229.252.112 port 47202 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:37:46.956476 sshd[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:46.962344 systemd-logind[2063]: New session 17 of user core.
Apr 17 23:37:46.966582 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:37:47.871884 sshd[5162]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:47.875101 systemd[1]: sshd@16-172.31.25.61:22-20.229.252.112:47202.service: Deactivated successfully.
Apr 17 23:37:47.881501 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:37:47.882499 systemd-logind[2063]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:37:47.883537 systemd-logind[2063]: Removed session 17.
Apr 17 23:37:48.045543 systemd[1]: Started sshd@17-172.31.25.61:22-20.229.252.112:47218.service - OpenSSH per-connection server daemon (20.229.252.112:47218).
Apr 17 23:37:49.058837 sshd[5174]: Accepted publickey for core from 20.229.252.112 port 47218 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:37:49.060494 sshd[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:49.065765 systemd-logind[2063]: New session 18 of user core.
Apr 17 23:37:49.072598 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:37:49.834455 sshd[5174]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:49.837786 systemd[1]: sshd@17-172.31.25.61:22-20.229.252.112:47218.service: Deactivated successfully.
Apr 17 23:37:49.842790 systemd-logind[2063]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:37:49.844014 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:37:49.846316 systemd-logind[2063]: Removed session 18.
Apr 17 23:37:55.009170 systemd[1]: Started sshd@18-172.31.25.61:22-20.229.252.112:47230.service - OpenSSH per-connection server daemon (20.229.252.112:47230).
Apr 17 23:37:56.048818 sshd[5190]: Accepted publickey for core from 20.229.252.112 port 47230 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:37:56.051429 sshd[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:56.058612 systemd-logind[2063]: New session 19 of user core.
Apr 17 23:37:56.066578 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:37:56.834647 sshd[5190]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:56.838342 systemd[1]: sshd@18-172.31.25.61:22-20.229.252.112:47230.service: Deactivated successfully.
Apr 17 23:37:56.844255 systemd-logind[2063]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:37:56.844991 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:37:56.846244 systemd-logind[2063]: Removed session 19.
Apr 17 23:38:02.021608 systemd[1]: Started sshd@19-172.31.25.61:22-20.229.252.112:36680.service - OpenSSH per-connection server daemon (20.229.252.112:36680).
Apr 17 23:38:03.067724 sshd[5205]: Accepted publickey for core from 20.229.252.112 port 36680 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:38:03.069416 sshd[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:03.074661 systemd-logind[2063]: New session 20 of user core.
Apr 17 23:38:03.077628 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:38:03.842949 sshd[5205]: pam_unix(sshd:session): session closed for user core
Apr 17 23:38:03.847464 systemd[1]: sshd@19-172.31.25.61:22-20.229.252.112:36680.service: Deactivated successfully.
Apr 17 23:38:03.852807 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:38:03.854022 systemd-logind[2063]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:38:03.855134 systemd-logind[2063]: Removed session 20.
Apr 17 23:38:04.014889 systemd[1]: Started sshd@20-172.31.25.61:22-20.229.252.112:36688.service - OpenSSH per-connection server daemon (20.229.252.112:36688).
Apr 17 23:38:05.033320 sshd[5219]: Accepted publickey for core from 20.229.252.112 port 36688 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:38:05.034887 sshd[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:05.040654 systemd-logind[2063]: New session 21 of user core.
Apr 17 23:38:05.044503 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 23:38:07.360024 containerd[2094]: time="2026-04-17T23:38:07.359504402Z" level=info msg="StopContainer for \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\" with timeout 30 (s)"
Apr 17 23:38:07.364509 containerd[2094]: time="2026-04-17T23:38:07.362714753Z" level=info msg="Stop container \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\" with signal terminated"
Apr 17 23:38:07.398967 systemd[1]: run-containerd-runc-k8s.io-a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6-runc.ZGITpS.mount: Deactivated successfully.
Apr 17 23:38:07.431602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c-rootfs.mount: Deactivated successfully.
Apr 17 23:38:07.453281 containerd[2094]: time="2026-04-17T23:38:07.452372129Z" level=info msg="StopContainer for \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\" with timeout 2 (s)"
Apr 17 23:38:07.453873 containerd[2094]: time="2026-04-17T23:38:07.453702620Z" level=info msg="Stop container \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\" with signal terminated"
Apr 17 23:38:07.454471 containerd[2094]: time="2026-04-17T23:38:07.454371129Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:38:07.455063 containerd[2094]: time="2026-04-17T23:38:07.455014822Z" level=info msg="shim disconnected" id=0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c namespace=k8s.io
Apr 17 23:38:07.455377 containerd[2094]: time="2026-04-17T23:38:07.455308257Z" level=warning msg="cleaning up after shim disconnected" id=0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c namespace=k8s.io
Apr 17 23:38:07.455377 containerd[2094]: time="2026-04-17T23:38:07.455335040Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:38:07.464585 systemd-networkd[1649]: lxc_health: Link DOWN
Apr 17 23:38:07.464693 systemd-networkd[1649]: lxc_health: Lost carrier
Apr 17 23:38:07.510425 containerd[2094]: time="2026-04-17T23:38:07.510381742Z" level=info msg="StopContainer for \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\" returns successfully"
Apr 17 23:38:07.518130 containerd[2094]: time="2026-04-17T23:38:07.517892109Z" level=info msg="StopPodSandbox for \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\""
Apr 17 23:38:07.518130 containerd[2094]: time="2026-04-17T23:38:07.517973939Z" level=info msg="Container to stop \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:38:07.525073 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01-shm.mount: Deactivated successfully.
Apr 17 23:38:07.548631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6-rootfs.mount: Deactivated successfully.
Apr 17 23:38:07.567453 containerd[2094]: time="2026-04-17T23:38:07.567282124Z" level=info msg="shim disconnected" id=a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6 namespace=k8s.io
Apr 17 23:38:07.567453 containerd[2094]: time="2026-04-17T23:38:07.567342810Z" level=warning msg="cleaning up after shim disconnected" id=a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6 namespace=k8s.io
Apr 17 23:38:07.567453 containerd[2094]: time="2026-04-17T23:38:07.567354851Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:38:07.568700 containerd[2094]: time="2026-04-17T23:38:07.568555559Z" level=info msg="shim disconnected" id=7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01 namespace=k8s.io
Apr 17 23:38:07.568700 containerd[2094]: time="2026-04-17T23:38:07.568605739Z" level=warning msg="cleaning up after shim disconnected" id=7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01 namespace=k8s.io
Apr 17 23:38:07.568700 containerd[2094]: time="2026-04-17T23:38:07.568617207Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:38:07.593900 containerd[2094]: time="2026-04-17T23:38:07.593856904Z" level=info msg="TearDown network for sandbox \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\" successfully"
Apr 17 23:38:07.593900 containerd[2094]: time="2026-04-17T23:38:07.593893141Z" level=info msg="StopPodSandbox for \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\" returns successfully"
Apr 17 23:38:07.597697 containerd[2094]: time="2026-04-17T23:38:07.597644160Z" level=info msg="StopContainer for \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\" returns successfully"
Apr 17 23:38:07.598389 containerd[2094]: time="2026-04-17T23:38:07.597996474Z" level=info msg="StopPodSandbox for \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\""
Apr 17 23:38:07.598389 containerd[2094]: time="2026-04-17T23:38:07.598035153Z" level=info msg="Container to stop \"618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:38:07.598389 containerd[2094]: time="2026-04-17T23:38:07.598052647Z" level=info msg="Container to stop \"8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:38:07.598389 containerd[2094]: time="2026-04-17T23:38:07.598066559Z" level=info msg="Container to stop \"2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:38:07.598389 containerd[2094]: time="2026-04-17T23:38:07.598079586Z" level=info msg="Container to stop \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:38:07.598389 containerd[2094]: time="2026-04-17T23:38:07.598093473Z" level=info msg="Container to stop \"0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:38:07.644050 containerd[2094]: time="2026-04-17T23:38:07.642712170Z" level=info msg="shim disconnected" id=3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3 namespace=k8s.io
Apr 17 23:38:07.644050 containerd[2094]: time="2026-04-17T23:38:07.642767016Z" level=warning msg="cleaning up after shim disconnected" id=3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3 namespace=k8s.io
Apr 17 23:38:07.644050 containerd[2094]: time="2026-04-17T23:38:07.642777999Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:38:07.659525 containerd[2094]: time="2026-04-17T23:38:07.659482899Z" level=info msg="TearDown network for sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" successfully"
Apr 17 23:38:07.659681 containerd[2094]: time="2026-04-17T23:38:07.659653355Z" level=info msg="StopPodSandbox for \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" returns successfully"
Apr 17 23:38:07.720240 kubelet[3648]: I0417 23:38:07.719628 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-bpf-maps\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.720240 kubelet[3648]: I0417 23:38:07.719692 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-cni-path\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.720240 kubelet[3648]: I0417 23:38:07.719711 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-hostproc\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.720240 kubelet[3648]: I0417 23:38:07.719725 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-host-proc-sys-net\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.720240 kubelet[3648]: I0417 23:38:07.719760 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxdjt\" (UniqueName: \"kubernetes.io/projected/3694d85c-3026-441e-a9ac-335fc3ba1b45-kube-api-access-lxdjt\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.720240 kubelet[3648]: I0417 23:38:07.719783 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-lib-modules\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.721000 kubelet[3648]: I0417 23:38:07.719802 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-xtables-lock\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.721000 kubelet[3648]: I0417 23:38:07.719823 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-cilium-run\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.721000 kubelet[3648]: I0417 23:38:07.719843 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-etc-cni-netd\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.721000 kubelet[3648]: I0417 23:38:07.719868 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3694d85c-3026-441e-a9ac-335fc3ba1b45-hubble-tls\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.721000 kubelet[3648]: I0417 23:38:07.719890 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffz8h\" (UniqueName: \"kubernetes.io/projected/17852fc7-e091-4b82-95f4-4543d8860eea-kube-api-access-ffz8h\") pod \"17852fc7-e091-4b82-95f4-4543d8860eea\" (UID: \"17852fc7-e091-4b82-95f4-4543d8860eea\") "
Apr 17 23:38:07.721000 kubelet[3648]: I0417 23:38:07.719904 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-cilium-cgroup\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.721392 kubelet[3648]: I0417 23:38:07.719920 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17852fc7-e091-4b82-95f4-4543d8860eea-cilium-config-path\") pod \"17852fc7-e091-4b82-95f4-4543d8860eea\" (UID: \"17852fc7-e091-4b82-95f4-4543d8860eea\") "
Apr 17 23:38:07.721392 kubelet[3648]: I0417 23:38:07.719936 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-host-proc-sys-kernel\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.721392 kubelet[3648]: I0417 23:38:07.719963 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3694d85c-3026-441e-a9ac-335fc3ba1b45-cilium-config-path\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.721392 kubelet[3648]: I0417 23:38:07.719980 3648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3694d85c-3026-441e-a9ac-335fc3ba1b45-clustermesh-secrets\") pod \"3694d85c-3026-441e-a9ac-335fc3ba1b45\" (UID: \"3694d85c-3026-441e-a9ac-335fc3ba1b45\") "
Apr 17 23:38:07.729574 kubelet[3648]: I0417 23:38:07.729518 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-cni-path" (OuterVolumeSpecName: "cni-path") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:38:07.729726 kubelet[3648]: I0417 23:38:07.729608 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:38:07.729726 kubelet[3648]: I0417 23:38:07.729719 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3694d85c-3026-441e-a9ac-335fc3ba1b45-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 17 23:38:07.729833 kubelet[3648]: I0417 23:38:07.729757 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:38:07.729833 kubelet[3648]: I0417 23:38:07.729779 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:38:07.731214 kubelet[3648]: I0417 23:38:07.727357 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-hostproc" (OuterVolumeSpecName: "hostproc") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:38:07.732443 kubelet[3648]: I0417 23:38:07.732396 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:38:07.732443 kubelet[3648]: I0417 23:38:07.732440 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:38:07.732596 kubelet[3648]: I0417 23:38:07.732461 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:38:07.734338 kubelet[3648]: I0417 23:38:07.734099 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:38:07.735080 kubelet[3648]: I0417 23:38:07.734960 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3694d85c-3026-441e-a9ac-335fc3ba1b45-kube-api-access-lxdjt" (OuterVolumeSpecName: "kube-api-access-lxdjt") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "kube-api-access-lxdjt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:38:07.737980 kubelet[3648]: I0417 23:38:07.737944 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:38:07.738055 kubelet[3648]: I0417 23:38:07.738032 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17852fc7-e091-4b82-95f4-4543d8860eea-kube-api-access-ffz8h" (OuterVolumeSpecName: "kube-api-access-ffz8h") pod "17852fc7-e091-4b82-95f4-4543d8860eea" (UID: "17852fc7-e091-4b82-95f4-4543d8860eea"). InnerVolumeSpecName "kube-api-access-ffz8h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:38:07.738367 kubelet[3648]: I0417 23:38:07.738100 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3694d85c-3026-441e-a9ac-335fc3ba1b45-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:38:07.738549 kubelet[3648]: I0417 23:38:07.738529 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3694d85c-3026-441e-a9ac-335fc3ba1b45-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3694d85c-3026-441e-a9ac-335fc3ba1b45" (UID: "3694d85c-3026-441e-a9ac-335fc3ba1b45"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:38:07.740362 kubelet[3648]: I0417 23:38:07.740242 3648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17852fc7-e091-4b82-95f4-4543d8860eea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "17852fc7-e091-4b82-95f4-4543d8860eea" (UID: "17852fc7-e091-4b82-95f4-4543d8860eea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:38:07.810219 kubelet[3648]: I0417 23:38:07.810139 3648 scope.go:117] "RemoveContainer" containerID="a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6"
Apr 17 23:38:07.821558 kubelet[3648]: I0417 23:38:07.821517 3648 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-host-proc-sys-kernel\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.821558 kubelet[3648]: I0417 23:38:07.821549 3648 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3694d85c-3026-441e-a9ac-335fc3ba1b45-cilium-config-path\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.821558 kubelet[3648]: I0417 23:38:07.821561 3648 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3694d85c-3026-441e-a9ac-335fc3ba1b45-clustermesh-secrets\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.821787 kubelet[3648]: I0417 23:38:07.821572 3648 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-bpf-maps\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.821787 kubelet[3648]: I0417 23:38:07.821583 3648 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-cni-path\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.821787 kubelet[3648]: I0417 23:38:07.821594 3648 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-hostproc\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.821787 kubelet[3648]: I0417 23:38:07.821603 3648 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-host-proc-sys-net\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.821787 kubelet[3648]: I0417 23:38:07.821614 3648 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lxdjt\" (UniqueName: \"kubernetes.io/projected/3694d85c-3026-441e-a9ac-335fc3ba1b45-kube-api-access-lxdjt\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.821787 kubelet[3648]: I0417 23:38:07.821625 3648 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-lib-modules\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.821787 kubelet[3648]: I0417 23:38:07.821636 3648 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-xtables-lock\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.821787 kubelet[3648]: I0417 23:38:07.821646 3648 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-cilium-run\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.822426 kubelet[3648]: I0417 23:38:07.821657 3648 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-etc-cni-netd\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.822426 kubelet[3648]: I0417 23:38:07.821670 3648 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3694d85c-3026-441e-a9ac-335fc3ba1b45-hubble-tls\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.822426 kubelet[3648]: I0417 23:38:07.821681 3648 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ffz8h\" (UniqueName: \"kubernetes.io/projected/17852fc7-e091-4b82-95f4-4543d8860eea-kube-api-access-ffz8h\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.822426 kubelet[3648]: I0417 23:38:07.821693 3648 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3694d85c-3026-441e-a9ac-335fc3ba1b45-cilium-cgroup\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.822426 kubelet[3648]: I0417 23:38:07.821706 3648 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17852fc7-e091-4b82-95f4-4543d8860eea-cilium-config-path\") on node \"ip-172-31-25-61\" DevicePath \"\""
Apr 17 23:38:07.832211 containerd[2094]: time="2026-04-17T23:38:07.830884214Z" level=info msg="RemoveContainer for \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\""
Apr 17 23:38:07.842452 containerd[2094]: time="2026-04-17T23:38:07.842408522Z" level=info msg="RemoveContainer for \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\" returns successfully"
Apr 17 23:38:07.842737 kubelet[3648]: I0417 23:38:07.842712 3648 scope.go:117] "RemoveContainer" containerID="2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473"
Apr 17 23:38:07.843927 containerd[2094]: time="2026-04-17T23:38:07.843896377Z" level=info msg="RemoveContainer for \"2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473\""
Apr 17 23:38:07.850220 containerd[2094]: time="2026-04-17T23:38:07.849931377Z" level=info msg="RemoveContainer for \"2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473\" returns successfully"
Apr 17 23:38:07.850333 kubelet[3648]: I0417 23:38:07.850222 3648 scope.go:117] "RemoveContainer" containerID="618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea"
Apr 17 23:38:07.851494 containerd[2094]: time="2026-04-17T23:38:07.851466005Z" level=info msg="RemoveContainer for \"618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea\""
Apr 17 23:38:07.856555 containerd[2094]: time="2026-04-17T23:38:07.856512744Z" level=info msg="RemoveContainer for \"618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea\" returns successfully"
Apr 17 23:38:07.856752 kubelet[3648]: I0417 23:38:07.856730 3648 scope.go:117] "RemoveContainer" containerID="0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335"
Apr 17 23:38:07.857886 containerd[2094]: time="2026-04-17T23:38:07.857860495Z" level=info msg="RemoveContainer for \"0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335\""
Apr 17 23:38:07.863153 containerd[2094]: time="2026-04-17T23:38:07.863117157Z" level=info msg="RemoveContainer for \"0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335\" returns successfully"
Apr 17 23:38:07.863436 kubelet[3648]: I0417 23:38:07.863412 3648 scope.go:117] "RemoveContainer" containerID="8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc"
Apr 17 23:38:07.864606 containerd[2094]: time="2026-04-17T23:38:07.864559478Z" level=info msg="RemoveContainer for \"8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc\""
Apr 17 23:38:07.869932 containerd[2094]: time="2026-04-17T23:38:07.869895604Z" level=info msg="RemoveContainer for \"8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc\" returns successfully"
Apr 17 23:38:07.870349 kubelet[3648]: I0417 23:38:07.870322 3648 scope.go:117] "RemoveContainer" containerID="a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6"
Apr 17 23:38:07.883755 containerd[2094]: time="2026-04-17T23:38:07.875563239Z" level=error msg="ContainerStatus for \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\": not found"
Apr 17 23:38:07.895413 kubelet[3648]: E0417 23:38:07.894757 3648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\": not found" containerID="a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6"
Apr 17 23:38:07.908429 kubelet[3648]: I0417 23:38:07.894833 3648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6"} err="failed to get container status \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7873b9299ecbf50932d33fe0a39296f13a394c680facff2eb9f63e289756af6\": not found"
Apr 17 23:38:07.908429 kubelet[3648]: I0417 23:38:07.908434 3648 scope.go:117] "RemoveContainer" containerID="2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473"
Apr 17 23:38:07.908958 containerd[2094]: time="2026-04-17T23:38:07.908874556Z" level=error msg="ContainerStatus for \"2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473\": not found"
Apr 17 23:38:07.909325 kubelet[3648]: E0417 23:38:07.909105 3648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473\": not found" containerID="2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473" Apr 17 23:38:07.909325 kubelet[3648]: I0417 23:38:07.909138 3648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473"} err="failed to get container status \"2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c6868f20ed96d452f37b98d021e1c935975e2bb169d48b29c3632d6586ab473\": not found" Apr 17 23:38:07.909325 kubelet[3648]: I0417 23:38:07.909167 3648 scope.go:117] "RemoveContainer" containerID="618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea" Apr 17 23:38:07.909489 containerd[2094]: time="2026-04-17T23:38:07.909418069Z" level=error msg="ContainerStatus for \"618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea\": not found" Apr 17 23:38:07.909597 kubelet[3648]: E0417 23:38:07.909551 3648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea\": not found" containerID="618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea" Apr 17 23:38:07.909597 kubelet[3648]: I0417 23:38:07.909581 3648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea"} err="failed to get container status \"618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"618956a7bc1d2c6f045d5a4c57f15392001cfaccff935d3823439804be1f3bea\": not found" Apr 17 23:38:07.909698 kubelet[3648]: I0417 23:38:07.909601 3648 scope.go:117] "RemoveContainer" containerID="0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335" Apr 17 23:38:07.909964 containerd[2094]: time="2026-04-17T23:38:07.909916616Z" level=error msg="ContainerStatus for \"0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335\": not found" Apr 17 23:38:07.910264 kubelet[3648]: E0417 23:38:07.910080 3648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335\": not found" containerID="0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335" Apr 17 23:38:07.910264 kubelet[3648]: I0417 23:38:07.910102 3648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335"} err="failed to get container status \"0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ead00072de2b3a1ca4f2ddea4904f4cc91298eb478241391c56c338ef2a8335\": not found" Apr 17 23:38:07.910264 kubelet[3648]: I0417 23:38:07.910117 3648 scope.go:117] "RemoveContainer" containerID="8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc" Apr 17 23:38:07.910713 kubelet[3648]: E0417 23:38:07.910421 3648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc\": not found" 
containerID="8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc" Apr 17 23:38:07.910713 kubelet[3648]: I0417 23:38:07.910438 3648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc"} err="failed to get container status \"8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc\": rpc error: code = NotFound desc = an error occurred when try to find container \"8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc\": not found" Apr 17 23:38:07.910713 kubelet[3648]: I0417 23:38:07.910452 3648 scope.go:117] "RemoveContainer" containerID="0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c" Apr 17 23:38:07.910828 containerd[2094]: time="2026-04-17T23:38:07.910300639Z" level=error msg="ContainerStatus for \"8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8dac32d7c0a5bd9b9af80423199ec39178a3d2419b00c55a76a1656a322a9fbc\": not found" Apr 17 23:38:07.911593 containerd[2094]: time="2026-04-17T23:38:07.911563475Z" level=info msg="RemoveContainer for \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\"" Apr 17 23:38:07.916768 containerd[2094]: time="2026-04-17T23:38:07.916730646Z" level=info msg="RemoveContainer for \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\" returns successfully" Apr 17 23:38:07.916972 kubelet[3648]: I0417 23:38:07.916950 3648 scope.go:117] "RemoveContainer" containerID="0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c" Apr 17 23:38:07.917322 containerd[2094]: time="2026-04-17T23:38:07.917293517Z" level=error msg="ContainerStatus for \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\": not found" Apr 17 23:38:07.917554 kubelet[3648]: E0417 23:38:07.917519 3648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\": not found" containerID="0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c" Apr 17 23:38:07.917630 kubelet[3648]: I0417 23:38:07.917564 3648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c"} err="failed to get container status \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0bc5f3b9fab55a81d71a5b914a3f6cf9423c335515e0762eb5c866dd23bed47c\": not found" Apr 17 23:38:08.387209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01-rootfs.mount: Deactivated successfully. Apr 17 23:38:08.387408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3-rootfs.mount: Deactivated successfully. Apr 17 23:38:08.387547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3-shm.mount: Deactivated successfully. Apr 17 23:38:08.387693 systemd[1]: var-lib-kubelet-pods-3694d85c\x2d3026\x2d441e\x2da9ac\x2d335fc3ba1b45-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlxdjt.mount: Deactivated successfully. Apr 17 23:38:08.387862 systemd[1]: var-lib-kubelet-pods-17852fc7\x2de091\x2d4b82\x2d95f4\x2d4543d8860eea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dffz8h.mount: Deactivated successfully. 
Apr 17 23:38:08.388008 systemd[1]: var-lib-kubelet-pods-3694d85c\x2d3026\x2d441e\x2da9ac\x2d335fc3ba1b45-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 17 23:38:08.388149 systemd[1]: var-lib-kubelet-pods-3694d85c\x2d3026\x2d441e\x2da9ac\x2d335fc3ba1b45-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 17 23:38:09.436241 kubelet[3648]: I0417 23:38:09.436184 3648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17852fc7-e091-4b82-95f4-4543d8860eea" path="/var/lib/kubelet/pods/17852fc7-e091-4b82-95f4-4543d8860eea/volumes" Apr 17 23:38:09.436872 kubelet[3648]: I0417 23:38:09.436811 3648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3694d85c-3026-441e-a9ac-335fc3ba1b45" path="/var/lib/kubelet/pods/3694d85c-3026-441e-a9ac-335fc3ba1b45/volumes" Apr 17 23:38:09.446423 sshd[5219]: pam_unix(sshd:session): session closed for user core Apr 17 23:38:09.449473 systemd[1]: sshd@20-172.31.25.61:22-20.229.252.112:36688.service: Deactivated successfully. Apr 17 23:38:09.456155 systemd[1]: session-21.scope: Deactivated successfully. Apr 17 23:38:09.456467 systemd-logind[2063]: Session 21 logged out. Waiting for processes to exit. Apr 17 23:38:09.458405 systemd-logind[2063]: Removed session 21. Apr 17 23:38:09.610935 systemd[1]: Started sshd@21-172.31.25.61:22-20.229.252.112:43132.service - OpenSSH per-connection server daemon (20.229.252.112:43132). 
Apr 17 23:38:10.334463 ntpd[2049]: Deleting interface #10 lxc_health, fe80::2481:7bff:feed:9a2e%8#123, interface stats: received=0, sent=0, dropped=0, active_time=65 secs Apr 17 23:38:10.334855 ntpd[2049]: 17 Apr 23:38:10 ntpd[2049]: Deleting interface #10 lxc_health, fe80::2481:7bff:feed:9a2e%8#123, interface stats: received=0, sent=0, dropped=0, active_time=65 secs Apr 17 23:38:10.642354 sshd[5387]: Accepted publickey for core from 20.229.252.112 port 43132 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:38:10.643898 sshd[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:38:10.649256 systemd-logind[2063]: New session 22 of user core. Apr 17 23:38:10.655661 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 17 23:38:11.559407 kubelet[3648]: E0417 23:38:11.559358 3648 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 23:38:11.848336 kubelet[3648]: I0417 23:38:11.848185 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-bpf-maps\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848336 kubelet[3648]: I0417 23:38:11.848255 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-hostproc\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848336 kubelet[3648]: I0417 23:38:11.848280 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdvqf\" (UniqueName: 
\"kubernetes.io/projected/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-kube-api-access-gdvqf\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848336 kubelet[3648]: I0417 23:38:11.848306 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-cilium-cgroup\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848336 kubelet[3648]: I0417 23:38:11.848327 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-etc-cni-netd\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848649 kubelet[3648]: I0417 23:38:11.848357 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-cilium-run\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848649 kubelet[3648]: I0417 23:38:11.848380 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-cilium-ipsec-secrets\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848649 kubelet[3648]: I0417 23:38:11.848413 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-host-proc-sys-net\") pod \"cilium-tdp2b\" (UID: 
\"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848649 kubelet[3648]: I0417 23:38:11.848435 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-host-proc-sys-kernel\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848649 kubelet[3648]: I0417 23:38:11.848462 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-cni-path\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848649 kubelet[3648]: I0417 23:38:11.848482 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-hubble-tls\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848827 kubelet[3648]: I0417 23:38:11.848511 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-xtables-lock\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848827 kubelet[3648]: I0417 23:38:11.848537 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-clustermesh-secrets\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848827 kubelet[3648]: I0417 
23:38:11.848562 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-lib-modules\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.848827 kubelet[3648]: I0417 23:38:11.848584 3648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f570a71b-35da-4f1f-9972-cdcf66bc0d8a-cilium-config-path\") pod \"cilium-tdp2b\" (UID: \"f570a71b-35da-4f1f-9972-cdcf66bc0d8a\") " pod="kube-system/cilium-tdp2b" Apr 17 23:38:11.929852 sshd[5387]: pam_unix(sshd:session): session closed for user core Apr 17 23:38:11.934143 systemd[1]: sshd@21-172.31.25.61:22-20.229.252.112:43132.service: Deactivated successfully. Apr 17 23:38:11.939065 systemd-logind[2063]: Session 22 logged out. Waiting for processes to exit. Apr 17 23:38:11.939277 systemd[1]: session-22.scope: Deactivated successfully. Apr 17 23:38:11.941825 systemd-logind[2063]: Removed session 22. Apr 17 23:38:12.047961 containerd[2094]: time="2026-04-17T23:38:12.047911923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tdp2b,Uid:f570a71b-35da-4f1f-9972-cdcf66bc0d8a,Namespace:kube-system,Attempt:0,}" Apr 17 23:38:12.081538 containerd[2094]: time="2026-04-17T23:38:12.081405724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:12.081947 containerd[2094]: time="2026-04-17T23:38:12.081470438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:12.081947 containerd[2094]: time="2026-04-17T23:38:12.081605749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:12.081947 containerd[2094]: time="2026-04-17T23:38:12.081724530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:12.105822 systemd[1]: Started sshd@22-172.31.25.61:22-20.229.252.112:43142.service - OpenSSH per-connection server daemon (20.229.252.112:43142). Apr 17 23:38:12.136863 containerd[2094]: time="2026-04-17T23:38:12.136831024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tdp2b,Uid:f570a71b-35da-4f1f-9972-cdcf66bc0d8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"774c3f21e580ba10fc2f276df83e9b6a5ac4f4e42486e6e22ffac2093c52028f\"" Apr 17 23:38:12.145853 containerd[2094]: time="2026-04-17T23:38:12.145695203Z" level=info msg="CreateContainer within sandbox \"774c3f21e580ba10fc2f276df83e9b6a5ac4f4e42486e6e22ffac2093c52028f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 17 23:38:12.170582 containerd[2094]: time="2026-04-17T23:38:12.170530599Z" level=info msg="CreateContainer within sandbox \"774c3f21e580ba10fc2f276df83e9b6a5ac4f4e42486e6e22ffac2093c52028f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8078ae1d3397c9a15d5d2a1c35bda17b3bbc0b494358f783aa5d358c0d4bf66f\"" Apr 17 23:38:12.171586 containerd[2094]: time="2026-04-17T23:38:12.171248004Z" level=info msg="StartContainer for \"8078ae1d3397c9a15d5d2a1c35bda17b3bbc0b494358f783aa5d358c0d4bf66f\"" Apr 17 23:38:12.228061 containerd[2094]: time="2026-04-17T23:38:12.228017066Z" level=info msg="StartContainer for \"8078ae1d3397c9a15d5d2a1c35bda17b3bbc0b494358f783aa5d358c0d4bf66f\" returns successfully" Apr 17 23:38:12.347653 containerd[2094]: time="2026-04-17T23:38:12.347405381Z" level=info msg="shim disconnected" id=8078ae1d3397c9a15d5d2a1c35bda17b3bbc0b494358f783aa5d358c0d4bf66f namespace=k8s.io Apr 17 23:38:12.347653 containerd[2094]: 
time="2026-04-17T23:38:12.347467388Z" level=warning msg="cleaning up after shim disconnected" id=8078ae1d3397c9a15d5d2a1c35bda17b3bbc0b494358f783aa5d358c0d4bf66f namespace=k8s.io Apr 17 23:38:12.347653 containerd[2094]: time="2026-04-17T23:38:12.347481068Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:12.850129 containerd[2094]: time="2026-04-17T23:38:12.849967326Z" level=info msg="CreateContainer within sandbox \"774c3f21e580ba10fc2f276df83e9b6a5ac4f4e42486e6e22ffac2093c52028f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 17 23:38:12.870044 containerd[2094]: time="2026-04-17T23:38:12.869997690Z" level=info msg="CreateContainer within sandbox \"774c3f21e580ba10fc2f276df83e9b6a5ac4f4e42486e6e22ffac2093c52028f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d02fba09934d2ab900ef08d8f693ad93c608cabe5f870f3fff135102bf5d51a6\"" Apr 17 23:38:12.872446 containerd[2094]: time="2026-04-17T23:38:12.871479407Z" level=info msg="StartContainer for \"d02fba09934d2ab900ef08d8f693ad93c608cabe5f870f3fff135102bf5d51a6\"" Apr 17 23:38:12.932545 containerd[2094]: time="2026-04-17T23:38:12.932502227Z" level=info msg="StartContainer for \"d02fba09934d2ab900ef08d8f693ad93c608cabe5f870f3fff135102bf5d51a6\" returns successfully" Apr 17 23:38:12.993677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d02fba09934d2ab900ef08d8f693ad93c608cabe5f870f3fff135102bf5d51a6-rootfs.mount: Deactivated successfully. 
Apr 17 23:38:13.005315 containerd[2094]: time="2026-04-17T23:38:13.005250290Z" level=info msg="shim disconnected" id=d02fba09934d2ab900ef08d8f693ad93c608cabe5f870f3fff135102bf5d51a6 namespace=k8s.io Apr 17 23:38:13.005315 containerd[2094]: time="2026-04-17T23:38:13.005303788Z" level=warning msg="cleaning up after shim disconnected" id=d02fba09934d2ab900ef08d8f693ad93c608cabe5f870f3fff135102bf5d51a6 namespace=k8s.io Apr 17 23:38:13.005315 containerd[2094]: time="2026-04-17T23:38:13.005316412Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:13.138990 sshd[5431]: Accepted publickey for core from 20.229.252.112 port 43142 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:38:13.140780 sshd[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:38:13.145890 systemd-logind[2063]: New session 23 of user core. Apr 17 23:38:13.153543 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 17 23:38:13.845103 sshd[5431]: pam_unix(sshd:session): session closed for user core Apr 17 23:38:13.848362 containerd[2094]: time="2026-04-17T23:38:13.845805092Z" level=info msg="CreateContainer within sandbox \"774c3f21e580ba10fc2f276df83e9b6a5ac4f4e42486e6e22ffac2093c52028f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 17 23:38:13.851700 systemd[1]: sshd@22-172.31.25.61:22-20.229.252.112:43142.service: Deactivated successfully. Apr 17 23:38:13.864418 systemd[1]: session-23.scope: Deactivated successfully. Apr 17 23:38:13.868289 systemd-logind[2063]: Session 23 logged out. Waiting for processes to exit. Apr 17 23:38:13.869858 systemd-logind[2063]: Removed session 23. 
Apr 17 23:38:13.881126 containerd[2094]: time="2026-04-17T23:38:13.881079388Z" level=info msg="CreateContainer within sandbox \"774c3f21e580ba10fc2f276df83e9b6a5ac4f4e42486e6e22ffac2093c52028f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c22708f0bf67aa52f54ac163321197af0664bd89de3d828970ef532dc1103b74\"" Apr 17 23:38:13.881979 containerd[2094]: time="2026-04-17T23:38:13.881938340Z" level=info msg="StartContainer for \"c22708f0bf67aa52f54ac163321197af0664bd89de3d828970ef532dc1103b74\"" Apr 17 23:38:13.946313 containerd[2094]: time="2026-04-17T23:38:13.946232538Z" level=info msg="StartContainer for \"c22708f0bf67aa52f54ac163321197af0664bd89de3d828970ef532dc1103b74\" returns successfully" Apr 17 23:38:14.004528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c22708f0bf67aa52f54ac163321197af0664bd89de3d828970ef532dc1103b74-rootfs.mount: Deactivated successfully. Apr 17 23:38:14.017505 systemd[1]: Started sshd@23-172.31.25.61:22-20.229.252.112:43150.service - OpenSSH per-connection server daemon (20.229.252.112:43150). 
Apr 17 23:38:14.039366 containerd[2094]: time="2026-04-17T23:38:14.039305090Z" level=info msg="shim disconnected" id=c22708f0bf67aa52f54ac163321197af0664bd89de3d828970ef532dc1103b74 namespace=k8s.io Apr 17 23:38:14.039366 containerd[2094]: time="2026-04-17T23:38:14.039363307Z" level=warning msg="cleaning up after shim disconnected" id=c22708f0bf67aa52f54ac163321197af0664bd89de3d828970ef532dc1103b74 namespace=k8s.io Apr 17 23:38:14.039366 containerd[2094]: time="2026-04-17T23:38:14.039373994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:14.743977 kubelet[3648]: I0417 23:38:14.743683 3648 setters.go:618] "Node became not ready" node="ip-172-31-25-61" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-17T23:38:14Z","lastTransitionTime":"2026-04-17T23:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 17 23:38:14.849870 containerd[2094]: time="2026-04-17T23:38:14.848936947Z" level=info msg="CreateContainer within sandbox \"774c3f21e580ba10fc2f276df83e9b6a5ac4f4e42486e6e22ffac2093c52028f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 17 23:38:14.876132 containerd[2094]: time="2026-04-17T23:38:14.876089427Z" level=info msg="CreateContainer within sandbox \"774c3f21e580ba10fc2f276df83e9b6a5ac4f4e42486e6e22ffac2093c52028f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c1f9ed0df07bb9a60fc7dbfb1bef8c98cf3ef8723592922a74b9d54a8b384b23\"" Apr 17 23:38:14.878069 containerd[2094]: time="2026-04-17T23:38:14.876936709Z" level=info msg="StartContainer for \"c1f9ed0df07bb9a60fc7dbfb1bef8c98cf3ef8723592922a74b9d54a8b384b23\"" Apr 17 23:38:14.940509 containerd[2094]: time="2026-04-17T23:38:14.940440416Z" level=info msg="StartContainer for \"c1f9ed0df07bb9a60fc7dbfb1bef8c98cf3ef8723592922a74b9d54a8b384b23\" returns 
successfully" Apr 17 23:38:14.962253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1f9ed0df07bb9a60fc7dbfb1bef8c98cf3ef8723592922a74b9d54a8b384b23-rootfs.mount: Deactivated successfully. Apr 17 23:38:14.965783 containerd[2094]: time="2026-04-17T23:38:14.965721900Z" level=info msg="shim disconnected" id=c1f9ed0df07bb9a60fc7dbfb1bef8c98cf3ef8723592922a74b9d54a8b384b23 namespace=k8s.io Apr 17 23:38:14.965963 containerd[2094]: time="2026-04-17T23:38:14.965801787Z" level=warning msg="cleaning up after shim disconnected" id=c1f9ed0df07bb9a60fc7dbfb1bef8c98cf3ef8723592922a74b9d54a8b384b23 namespace=k8s.io Apr 17 23:38:14.965963 containerd[2094]: time="2026-04-17T23:38:14.965815938Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:15.032847 sshd[5621]: Accepted publickey for core from 20.229.252.112 port 43150 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:38:15.033582 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:38:15.039015 systemd-logind[2063]: New session 24 of user core. Apr 17 23:38:15.042519 systemd[1]: Started session-24.scope - Session 24 of User core. 
Apr 17 23:38:15.434250 kubelet[3648]: E0417 23:38:15.433993 3648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-wtgkq" podUID="afba6cf2-1f0b-4aff-afbb-3b234ab8cf3d" Apr 17 23:38:15.854609 containerd[2094]: time="2026-04-17T23:38:15.854404625Z" level=info msg="CreateContainer within sandbox \"774c3f21e580ba10fc2f276df83e9b6a5ac4f4e42486e6e22ffac2093c52028f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 17 23:38:15.880003 containerd[2094]: time="2026-04-17T23:38:15.879959673Z" level=info msg="CreateContainer within sandbox \"774c3f21e580ba10fc2f276df83e9b6a5ac4f4e42486e6e22ffac2093c52028f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ef8017d6c09fd08e22039bb4e42dd53de84233ad2a7275ec560c60a7bff4929\"" Apr 17 23:38:15.881348 containerd[2094]: time="2026-04-17T23:38:15.880534481Z" level=info msg="StartContainer for \"3ef8017d6c09fd08e22039bb4e42dd53de84233ad2a7275ec560c60a7bff4929\"" Apr 17 23:38:15.946342 containerd[2094]: time="2026-04-17T23:38:15.946284356Z" level=info msg="StartContainer for \"3ef8017d6c09fd08e22039bb4e42dd53de84233ad2a7275ec560c60a7bff4929\" returns successfully" Apr 17 23:38:15.983211 systemd[1]: run-containerd-runc-k8s.io-3ef8017d6c09fd08e22039bb4e42dd53de84233ad2a7275ec560c60a7bff4929-runc.xyG279.mount: Deactivated successfully. Apr 17 23:38:16.969274 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 17 23:38:19.942708 (udev-worker)[5777]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:38:19.943511 (udev-worker)[6260]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 23:38:19.948350 systemd-networkd[1649]: lxc_health: Link UP
Apr 17 23:38:19.951240 systemd-networkd[1649]: lxc_health: Gained carrier
Apr 17 23:38:20.087225 kubelet[3648]: I0417 23:38:20.087148 3648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tdp2b" podStartSLOduration=9.087126073 podStartE2EDuration="9.087126073s" podCreationTimestamp="2026-04-17 23:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:38:16.907610371 +0000 UTC m=+115.737612461" watchObservedRunningTime="2026-04-17 23:38:20.087126073 +0000 UTC m=+118.917128138"
Apr 17 23:38:21.385399 containerd[2094]: time="2026-04-17T23:38:21.385353949Z" level=info msg="StopPodSandbox for \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\""
Apr 17 23:38:21.385906 containerd[2094]: time="2026-04-17T23:38:21.385461210Z" level=info msg="TearDown network for sandbox \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\" successfully"
Apr 17 23:38:21.385906 containerd[2094]: time="2026-04-17T23:38:21.385477535Z" level=info msg="StopPodSandbox for \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\" returns successfully"
Apr 17 23:38:21.388218 containerd[2094]: time="2026-04-17T23:38:21.386401347Z" level=info msg="RemovePodSandbox for \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\""
Apr 17 23:38:21.391758 containerd[2094]: time="2026-04-17T23:38:21.391719162Z" level=info msg="Forcibly stopping sandbox \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\""
Apr 17 23:38:21.391876 containerd[2094]: time="2026-04-17T23:38:21.391824084Z" level=info msg="TearDown network for sandbox \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\" successfully"
Apr 17 23:38:21.394421 systemd-networkd[1649]: lxc_health: Gained IPv6LL
Apr 17 23:38:21.404622 containerd[2094]: time="2026-04-17T23:38:21.402829208Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:38:21.404622 containerd[2094]: time="2026-04-17T23:38:21.402910656Z" level=info msg="RemovePodSandbox \"7eecd7312bdb2a8cd9f8726487b04eff1ea0c4f4447c6c80be5499b2e227da01\" returns successfully"
Apr 17 23:38:21.405424 containerd[2094]: time="2026-04-17T23:38:21.405388585Z" level=info msg="StopPodSandbox for \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\""
Apr 17 23:38:21.405507 containerd[2094]: time="2026-04-17T23:38:21.405490230Z" level=info msg="TearDown network for sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" successfully"
Apr 17 23:38:21.405564 containerd[2094]: time="2026-04-17T23:38:21.405505799Z" level=info msg="StopPodSandbox for \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" returns successfully"
Apr 17 23:38:21.407222 containerd[2094]: time="2026-04-17T23:38:21.406257188Z" level=info msg="RemovePodSandbox for \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\""
Apr 17 23:38:21.407725 containerd[2094]: time="2026-04-17T23:38:21.407696635Z" level=info msg="Forcibly stopping sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\""
Apr 17 23:38:21.408102 containerd[2094]: time="2026-04-17T23:38:21.407972892Z" level=info msg="TearDown network for sandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" successfully"
Apr 17 23:38:21.420319 containerd[2094]: time="2026-04-17T23:38:21.420235373Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:38:21.420464 containerd[2094]: time="2026-04-17T23:38:21.420350372Z" level=info msg="RemovePodSandbox \"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\" returns successfully"
Apr 17 23:38:22.612293 systemd[1]: run-containerd-runc-k8s.io-3ef8017d6c09fd08e22039bb4e42dd53de84233ad2a7275ec560c60a7bff4929-runc.Du3kls.mount: Deactivated successfully.
Apr 17 23:38:24.334573 ntpd[2049]: Listen normally on 13 lxc_health [fe80::6009:7dff:fefd:aac0%14]:123
Apr 17 23:38:24.335119 ntpd[2049]: 17 Apr 23:38:24 ntpd[2049]: Listen normally on 13 lxc_health [fe80::6009:7dff:fefd:aac0%14]:123
Apr 17 23:38:25.132283 sshd[5621]: pam_unix(sshd:session): session closed for user core
Apr 17 23:38:25.137570 systemd-logind[2063]: Session 24 logged out. Waiting for processes to exit.
Apr 17 23:38:25.142520 systemd[1]: sshd@23-172.31.25.61:22-20.229.252.112:43150.service: Deactivated successfully.
Apr 17 23:38:25.154113 systemd[1]: session-24.scope: Deactivated successfully.
Apr 17 23:38:25.156675 systemd-logind[2063]: Removed session 24.
Apr 17 23:38:39.296298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f4022f87aa04561a3cb882b80ab48accb19b4ea8e317b542456a71f16ef3249-rootfs.mount: Deactivated successfully.
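The containerd entries above are logfmt: space-separated key=value pairs, with values quoted when they contain spaces and inner quotes escaped as \". The "not found" warning during RemovePodSandbox is benign here, since the sandbox was already torn down by the preceding StopPodSandbox. A minimal sketch of parsing one such entry; `parse_logfmt` is our own helper, not a containerd API:

```python
import re

# key=value, where value is either a quoted string (with \" escapes) or a bare token.
LOGFMT = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

def parse_logfmt(entry: str) -> dict:
    """Parse a containerd logfmt log entry into a dict of fields."""
    out = {}
    for m in LOGFMT.finditer(entry):
        out[m.group(1)] = m.group(2) if m.group(2) is not None else m.group(3)
    return out

# The RemovePodSandbox warning from the log above (journald prefix stripped).
entry = ('time="2026-04-17T23:38:21.420235373Z" level=warning '
         'msg="Failed to get podSandbox status for container event for sandboxID '
         '\\"3830e9bf4ac6ccbebfc78ef89f4b25ac374112866c5ee98e6a10c2c92af509f3\\": '
         'an error occurred when try to find sandbox: not found. '
         'Sending the event with nil podSandboxStatus."')

rec = parse_logfmt(entry)
print(rec["level"])  # warning
print(rec["time"])   # 2026-04-17T23:38:21.420235373Z
```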
Apr 17 23:38:39.326388 containerd[2094]: time="2026-04-17T23:38:39.326020956Z" level=info msg="shim disconnected" id=0f4022f87aa04561a3cb882b80ab48accb19b4ea8e317b542456a71f16ef3249 namespace=k8s.io
Apr 17 23:38:39.326388 containerd[2094]: time="2026-04-17T23:38:39.326077208Z" level=warning msg="cleaning up after shim disconnected" id=0f4022f87aa04561a3cb882b80ab48accb19b4ea8e317b542456a71f16ef3249 namespace=k8s.io
Apr 17 23:38:39.326388 containerd[2094]: time="2026-04-17T23:38:39.326089131Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:38:39.910074 kubelet[3648]: I0417 23:38:39.909247 3648 scope.go:117] "RemoveContainer" containerID="0f4022f87aa04561a3cb882b80ab48accb19b4ea8e317b542456a71f16ef3249"
Apr 17 23:38:39.912364 containerd[2094]: time="2026-04-17T23:38:39.912308564Z" level=info msg="CreateContainer within sandbox \"3c4c66deeb9fa1c1b167b159db79ca76227fdaff6308837c0f42c0f335d7a999\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 17 23:38:39.934385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount62822621.mount: Deactivated successfully.
Apr 17 23:38:39.941503 containerd[2094]: time="2026-04-17T23:38:39.941461676Z" level=info msg="CreateContainer within sandbox \"3c4c66deeb9fa1c1b167b159db79ca76227fdaff6308837c0f42c0f335d7a999\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0c4ff84c09cc4de2c69f5b7717cb1eaee1e5a83b53216c2660d499b9837fa066\""
Apr 17 23:38:39.942064 containerd[2094]: time="2026-04-17T23:38:39.942025655Z" level=info msg="StartContainer for \"0c4ff84c09cc4de2c69f5b7717cb1eaee1e5a83b53216c2660d499b9837fa066\""
Apr 17 23:38:40.021303 containerd[2094]: time="2026-04-17T23:38:40.021251401Z" level=info msg="StartContainer for \"0c4ff84c09cc4de2c69f5b7717cb1eaee1e5a83b53216c2660d499b9837fa066\" returns successfully"
Apr 17 23:38:40.295528 systemd[1]: run-containerd-runc-k8s.io-0c4ff84c09cc4de2c69f5b7717cb1eaee1e5a83b53216c2660d499b9837fa066-runc.oBtANr.mount: Deactivated successfully.
Apr 17 23:38:43.901639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8375737e2792a80007c6589ea606a0b6fc54744e04d258f034235ca8d5cd1c3e-rootfs.mount: Deactivated successfully.
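The "shim disconnected" / "RemoveContainer" / "CreateContainer ... Attempt:1" sequence above is the kubelet recreating a crashed static-pod container: the restart count is carried through containerd as the `Attempt` field of `&ContainerMetadata{...}` (cilium-agent started earlier at Attempt:0; kube-controller-manager comes back here at Attempt:1). A minimal sketch tallying the highest attempt seen per container name; `container_restarts` is our own helper, not a kubelet API:

```python
import re

# Matches the ContainerMetadata fragment that containerd logs on CreateContainer.
META = re.compile(r'&ContainerMetadata\{Name:([^,]+),Attempt:(\d+),\}')

def container_restarts(entries):
    """Map container name -> highest Attempt seen across CreateContainer entries."""
    attempts = {}
    for entry in entries:
        for name, attempt in META.findall(entry):
            attempts[name] = max(attempts.get(name, 0), int(attempt))
    return attempts

# Abbreviated CreateContainer messages modeled on the log above.
entries = [
    'msg="CreateContainer within sandbox ... for container '
    '&ContainerMetadata{Name:cilium-agent,Attempt:0,}"',
    'msg="CreateContainer within sandbox ... for container '
    '&ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"',
]
print(container_restarts(entries))  # {'cilium-agent': 0, 'kube-controller-manager': 1}
```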
Apr 17 23:38:43.923224 containerd[2094]: time="2026-04-17T23:38:43.922341831Z" level=info msg="shim disconnected" id=8375737e2792a80007c6589ea606a0b6fc54744e04d258f034235ca8d5cd1c3e namespace=k8s.io
Apr 17 23:38:43.923224 containerd[2094]: time="2026-04-17T23:38:43.922400102Z" level=warning msg="cleaning up after shim disconnected" id=8375737e2792a80007c6589ea606a0b6fc54744e04d258f034235ca8d5cd1c3e namespace=k8s.io
Apr 17 23:38:43.923224 containerd[2094]: time="2026-04-17T23:38:43.922412021Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:38:44.419205 kubelet[3648]: E0417 23:38:44.419128 3648 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-61?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 17 23:38:44.939650 kubelet[3648]: I0417 23:38:44.939614 3648 scope.go:117] "RemoveContainer" containerID="8375737e2792a80007c6589ea606a0b6fc54744e04d258f034235ca8d5cd1c3e"
Apr 17 23:38:44.941932 containerd[2094]: time="2026-04-17T23:38:44.941894449Z" level=info msg="CreateContainer within sandbox \"dfb0d54c07cdfc132787be2915b9badcf5cbaeac8419c6d0383ab765f5b7d3df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 17 23:38:44.967473 containerd[2094]: time="2026-04-17T23:38:44.967416970Z" level=info msg="CreateContainer within sandbox \"dfb0d54c07cdfc132787be2915b9badcf5cbaeac8419c6d0383ab765f5b7d3df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b29db1c93f6c30fa87af098c04eb870f4605f833742f6d8434369f362922b2dc\""
Apr 17 23:38:44.968136 containerd[2094]: time="2026-04-17T23:38:44.968103176Z" level=info msg="StartContainer for \"b29db1c93f6c30fa87af098c04eb870f4605f833742f6d8434369f362922b2dc\""
Apr 17 23:38:45.046911 containerd[2094]: time="2026-04-17T23:38:45.046862572Z" level=info msg="StartContainer for \"b29db1c93f6c30fa87af098c04eb870f4605f833742f6d8434369f362922b2dc\" returns successfully"